03141ed809ad-1 | texts = text_splitter.create_documents([state_of_the_union])
print(texts[0])
page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.' lookup_str='' metadata={} lookup_index=0
Here’s an example of passing metadata along with the documents; notice that it is split along with the documents.
metadatas = [{"document": 1}, {"document": 2}]
documents = text_splitter.create_documents([state_of_the_union, state_of_the_union], metadatas=metadatas)
print(documents[0]) | https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/character_text_splitter.html |
page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.' lookup_str='' metadata={'document': 1} lookup_index=0
Python Code Text Splitter#
PythonCodeTextSplitter splits text along Python class and method definitions. It’s implemented as a simple subclass of RecursiveCharacterTextSplitter with Python-specific separators. See the source code to see the Python syntax expected by default.
How the text is split: by a list of Python-specific characters
How the chunk size is measured: by length function passed in (defaults to number of characters)
from langchain.text_splitter import PythonCodeTextSplitter
python_text = """
class Foo:
    def bar():
def foo():
def testing_func():
def bar():
"""
python_splitter = PythonCodeTextSplitter(chunk_size=30, chunk_overlap=0)
docs = python_splitter.create_documents([python_text])
docs
[Document(page_content='Foo:\n\n def bar():', lookup_str='', metadata={}, lookup_index=0),
Document(page_content='foo():\n\ndef testing_func():', lookup_str='', metadata={}, lookup_index=0),
Document(page_content='bar():', lookup_str='', metadata={}, lookup_index=0)]
Hugging Face Length Function#
Most LLMs are constrained by the number of tokens that you can pass in, which is not the same as the number of characters. In order to get a more accurate estimate, we can use Hugging Face tokenizers to count the text length.
How the text is split: by character passed in
How the chunk size is measured: by Hugging Face tokenizer
from transformers import GPT2TokenizerFast
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
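Under the hood, measuring the chunk size “by Hugging Face tokenizer” simply means counting tokens rather than characters, roughly a length function like the following (an illustrative sketch, not the library’s exact internals):
def token_length(text: str) -> int:
    # Count tokens with the GPT-2 tokenizer created above instead of counting characters
    return len(tokenizer.encode(text))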
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter.from_huggingface_tokenizer(tokenizer, chunk_size=100, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.
Last year COVID-19 kept us apart. This year we are finally together again.
Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.
With a duty to one another to the American people to the Constitution.
Markdown Text Splitter#
MarkdownTextSplitter splits text along Markdown headings, code blocks, and horizontal rules. It’s implemented as a simple subclass of RecursiveCharacterTextSplitter with Markdown-specific separators. See the source code to see the Markdown syntax expected by default.
How the text is split: by a list of Markdown-specific characters
How the chunk size is measured: by length function passed in (defaults to number of characters)
from langchain.text_splitter import MarkdownTextSplitter
markdown_text = """
# 🦜️🔗 LangChain
⚡ Building applications with LLMs through composability ⚡
## Quick Install
```bash
# Hopefully this code block isn't split
pip install langchain
```
As an open source project in a rapidly developing field, we are extremely open to contributions.
"""
markdown_splitter = MarkdownTextSplitter(chunk_size=100, chunk_overlap=0)
docs = markdown_splitter.create_documents([markdown_text])
docs
[Document(page_content='# 🦜️🔗 LangChain\n\n⚡ Building applications with LLMs through composability ⚡', lookup_str='', metadata={}, lookup_index=0),
Document(page_content="Quick Install\n\n```bash\n# Hopefully this code block isn't split\npip install langchain", lookup_str='', metadata={}, lookup_index=0),
Document(page_content='As an open source project in a rapidly developing field, we are extremely open to contributions.', lookup_str='', metadata={}, lookup_index=0)]
tiktoken (OpenAI) Length Function#
You can also use tiktoken, an open-source tokenizer package from OpenAI, to estimate the number of tokens used. It will probably be more accurate for OpenAI’s own models.
How the text is split: by character passed in
How the chunk size is measured: by tiktoken tokenizer
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter.from_tiktoken_encoder(chunk_size=100, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.
Last year COVID-19 kept us apart. This year we are finally together again.
Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.
With a duty to one another to the American people to the Constitution.
NLTK Text Splitter#
Rather than just splitting on “\n\n”, we can use NLTK to split based on tokenizers.
How the text is split: by NLTK
How the chunk size is measured: by length function passed in (defaults to number of characters)
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import NLTKTextSplitter
text_splitter = NLTKTextSplitter(chunk_size=1000)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
Madam Speaker, Madam Vice President, our First Lady and Second Gentleman.
Members of Congress and the Cabinet.
Justices of the Supreme Court.
My fellow Americans.
Last year COVID-19 kept us apart.
This year we are finally together again.
Tonight, we meet as Democrats Republicans and Independents.
But most importantly as Americans.
With a duty to one another to the American people to the Constitution.
And with an unwavering resolve that freedom will always triumph over tyranny.
Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways.
But he badly miscalculated.
He thought he could roll into Ukraine and the world would roll over.
Instead he met a wall of strength he never imagined.
He met the Ukrainian people.
From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.
Groups of citizens blocking tanks with their bodies.
Latex Text Splitter#
LatexTextSplitter splits text along Latex headings, headlines, enumerations, and more. It’s implemented as a simple subclass of RecursiveCharacterTextSplitter with Latex-specific separators. See the source code to see the Latex syntax expected by default.
How the text is split: by a list of Latex-specific tags
How the chunk size is measured: by length function passed in (defaults to number of characters)
from langchain.text_splitter import LatexTextSplitter
latex_text = """
\documentclass{article}
\begin{document}
\maketitle
\section{Introduction}
Large language models (LLMs) are a type of machine learning model that can be trained on vast amounts of text data to generate human-like language. In recent years, LLMs have made significant advances in a variety of natural language processing tasks, including language translation, text generation, and sentiment analysis.
\subsection{History of LLMs}
The earliest LLMs were developed in the 1980s and 1990s, but they were limited by the amount of data that could be processed and the computational power available at the time. In the past decade, however, advances in hardware and software have made it possible to train LLMs on massive datasets, leading to significant improvements in performance.
\subsection{Applications of LLMs}
LLMs have many applications in industry, including chatbots, content creation, and virtual assistants. They can also be used in academia for research in linguistics, psychology, and computational linguistics.
\end{document}
"""
latex_splitter = LatexTextSplitter(chunk_size=400, chunk_overlap=0)
docs = latex_splitter.create_documents([latex_text])
docs | https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/latex.html |
[Document(page_content='\\documentclass{article}\n\n\x08egin{document}\n\n\\maketitle', lookup_str='', metadata={}, lookup_index=0),
Document(page_content='Introduction}\nLarge language models (LLMs) are a type of machine learning model that can be trained on vast amounts of text data to generate human-like language. In recent years, LLMs have made significant advances in a variety of natural language processing tasks, including language translation, text generation, and sentiment analysis.', lookup_str='', metadata={}, lookup_index=0),
Document(page_content='History of LLMs}\nThe earliest LLMs were developed in the 1980s and 1990s, but they were limited by the amount of data that could be processed and the computational power available at the time. In the past decade, however, advances in hardware and software have made it possible to train LLMs on massive datasets, leading to significant improvements in performance.', lookup_str='', metadata={}, lookup_index=0),
Document(page_content='Applications of LLMs}\nLLMs have many applications in industry, including chatbots, content creation, and virtual assistants. They can also be used in academia for research in linguistics, psychology, and computational linguistics.\n\n\\end{document}', lookup_str='', metadata={}, lookup_index=0)]
Tiktoken Text Splitter#
How the text is split: by tiktoken tokens
How the chunk size is measured: by tiktoken tokens
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
state_of_the_union = f.read()
from langchain.text_splitter import TokenTextSplitter
text_splitter = TokenTextSplitter(chunk_size=10, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
Madam Speaker, Madam Vice President, our
SVM Retriever#
This notebook goes over how to use a retriever that, under the hood, uses an SVM from scikit-learn.
Largely based on https://github.com/karpathy/randomfun/blob/master/knn_vs_svm.ipynb
from langchain.retrievers import SVMRetriever
from langchain.embeddings import OpenAIEmbeddings
# !pip install scikit-learn
Create New Retriever with Texts#
retriever = SVMRetriever.from_texts(["foo", "bar", "world", "hello", "foo bar"], OpenAIEmbeddings())
Use Retriever#
We can now use the retriever!
result = retriever.get_relevant_documents("foo")
result
[Document(page_content='foo', metadata={}),
Document(page_content='foo bar', metadata={}),
Document(page_content='hello', metadata={}),
Document(page_content='world', metadata={})]
ChatGPT Plugin Retriever#
This notebook shows how to use the ChatGPT Retriever Plugin within LangChain.
Create#
First, let’s go over how to create the ChatGPT Retriever Plugin.
To set up the ChatGPT Retriever Plugin, please follow instructions here.
You can also create the ChatGPT Retriever Plugin from LangChain document loaders. The below code walks through how to do that.
# STEP 1: Load
# Load documents using LangChain's DocumentLoaders
# This is from https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/csv.html
from langchain.document_loaders.csv_loader import CSVLoader
loader = CSVLoader(file_path='../../document_loaders/examples/example_data/mlb_teams_2012.csv')
data = loader.load()
# STEP 2: Convert
# Convert Document to format expected by https://github.com/openai/chatgpt-retrieval-plugin
from typing import List
from langchain.docstore.document import Document
import json
def write_json(path: str, documents: List[Document]) -> None:
    results = [{"text": doc.page_content} for doc in documents]
    with open(path, "w") as f:
        json.dump(results, f, indent=2)
write_json("foo.json", data)
# STEP 3: Use
# Ingest this as you would any other json file in https://github.com/openai/chatgpt-retrieval-plugin/tree/main/scripts/process_json
Using the ChatGPT Retriever Plugin#
Okay, so we’ve created the ChatGPT Retriever Plugin, but how do we actually use it?
The below code walks through how to do that. | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chatgpt-plugin-retriever.html |
from langchain.retrievers import ChatGPTPluginRetriever
retriever = ChatGPTPluginRetriever(url="http://0.0.0.0:8000", bearer_token="foo")
retriever.get_relevant_documents("alice's phone number")
[Document(page_content="This is Alice's phone number: 123-456-7890", lookup_str='', metadata={'id': '456_0', 'metadata': {'source': 'email', 'source_id': '567', 'url': None, 'created_at': '1609592400.0', 'author': 'Alice', 'document_id': '456'}, 'embedding': None, 'score': 0.925571561}, lookup_index=0),
Document(page_content='This is a document about something', lookup_str='', metadata={'id': '123_0', 'metadata': {'source': 'file', 'source_id': 'https://example.com/doc1', 'url': 'https://example.com/doc1', 'created_at': '1609502400.0', 'author': 'Alice', 'document_id': '123'}, 'embedding': None, 'score': 0.6987589}, lookup_index=0), | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chatgpt-plugin-retriever.html |
c01e52142850-2 | Document(page_content='Team: Angels "Payroll (millions)": 154.49 "Wins": 89', lookup_str='', metadata={'id': '59c2c0c1-ae3f-4272-a1da-f44a723ea631_0', 'metadata': {'source': None, 'source_id': None, 'url': None, 'created_at': None, 'author': None, 'document_id': '59c2c0c1-ae3f-4272-a1da-f44a723ea631'}, 'embedding': None, 'score': 0.697888613}, lookup_index=0)]
Time Weighted VectorStore Retriever#
This retriever uses a combination of semantic similarity and recency.
The algorithm for scoring them is:
semantic_similarity + (1.0 - decay_rate) ** hours_passed
Notably, hours_passed refers to the hours passed since the object in the retriever was last accessed, not since it was created. This means that frequently accessed objects remain “fresh.”
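As a rough illustration, the combined score for a single stored document could be computed like this (a minimal sketch of the formula above with a made-up function name, not the library’s internal implementation):
from datetime import datetime

def combined_score(semantic_similarity: float, decay_rate: float, last_accessed_at: datetime) -> float:
    # Hours since the document was last accessed (not since it was created)
    hours_passed = (datetime.now() - last_accessed_at).total_seconds() / 3600
    return semantic_similarity + (1.0 - decay_rate) ** hours_passed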
import faiss
from datetime import datetime, timedelta
from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import TimeWeightedVectorStoreRetriever
from langchain.schema import Document
from langchain.vectorstores import FAISS
Low Decay Rate#
A low decay rate (here, to be extreme, we will set it close to 0) means memories will be “remembered” for longer. A decay rate of 0 means memories are never forgotten, making this retriever equivalent to a plain vector lookup.
# Define your embedding model
embeddings_model = OpenAIEmbeddings()
# Initialize the vectorstore as empty
embedding_size = 1536
index = faiss.IndexFlatL2(embedding_size)
vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})
retriever = TimeWeightedVectorStoreRetriever(vectorstore=vectorstore, decay_rate=.0000000000000000000000001, k=1)
yesterday = datetime.now() - timedelta(days=1)
retriever.add_documents([Document(page_content="hello world", metadata={"last_accessed_at": yesterday})])
retriever.add_documents([Document(page_content="hello foo")]) | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/time_weighted_vectorstore.html |
['5c9f7c06-c9eb-45f2-aea5-efce5fb9f2bd']
# "Hello World" is returned first because it is most salient, and the decay rate is close to 0., meaning it's still recent enough
retriever.get_relevant_documents("hello world")
[Document(page_content='hello world', metadata={'last_accessed_at': datetime.datetime(2023, 4, 16, 22, 9, 1, 966261), 'created_at': datetime.datetime(2023, 4, 16, 22, 9, 0, 374683), 'buffer_idx': 0})]
High Decay Rate#
With a high decay rate (e.g., several 9’s), the recency score quickly goes to 0! If you set this all the way to 1, recency is 0 for all objects, once again making this equivalent to a vector lookup.
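As a quick sanity check on the numbers (illustrative arithmetic, not output from the notebook):
# With decay_rate = 0.999, the recency term is already tiny after one hour
# and vanishes entirely after a day:
(1.0 - 0.999) ** 1    # 0.001
(1.0 - 0.999) ** 24   # ~1e-72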
# Define your embedding model
embeddings_model = OpenAIEmbeddings()
# Initialize the vectorstore as empty
embedding_size = 1536
index = faiss.IndexFlatL2(embedding_size)
vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})
retriever = TimeWeightedVectorStoreRetriever(vectorstore=vectorstore, decay_rate=.999, k=1)
yesterday = datetime.now() - timedelta(days=1)
retriever.add_documents([Document(page_content="hello world", metadata={"last_accessed_at": yesterday})])
retriever.add_documents([Document(page_content="hello foo")])
['40011466-5bbe-4101-bfd1-e22e7f505de2'] | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/time_weighted_vectorstore.html |
9402d46116fa-2 | # "Hello Foo" is returned first because "hello world" is mostly forgotten
retriever.get_relevant_documents("hello world")
[Document(page_content='hello foo', metadata={'last_accessed_at': datetime.datetime(2023, 4, 16, 22, 9, 2, 494798), 'created_at': datetime.datetime(2023, 4, 16, 22, 9, 2, 178722), 'buffer_idx': 1})]
Pinecone Hybrid Search#
This notebook goes over how to use a retriever that under the hood uses Pinecone and Hybrid Search.
The logic of this retriever is taken from this documentation.
from langchain.retrievers import PineconeHybridSearchRetriever
Setup Pinecone#
You should only have to do this part once.
Note: it’s important to make sure that the “context” field that holds the document text in the metadata is not indexed. Currently you need to explicitly specify the fields you do want to index. For more information, check out Pinecone’s docs.
import os
import pinecone
api_key = os.getenv("PINECONE_API_KEY") or "PINECONE_API_KEY"
# find environment next to your API key in the Pinecone console
env = os.getenv("PINECONE_ENVIRONMENT") or "PINECONE_ENVIRONMENT"
index_name = "langchain-pinecone-hybrid-search"
pinecone.init(api_key=api_key, environment=env)
pinecone.whoami()
WhoAmIResponse(username='load', user_label='label', projectname='load-test')
# create the index
pinecone.create_index(
name = index_name,
dimension = 1536, # dimensionality of dense model
metric = "dotproduct", # sparse values supported only for dotproduct
pod_type = "s1",
metadata_config={"indexed": []} # see explanation above
)
Now that it’s created, we can use it.
index = pinecone.Index(index_name)
Get embeddings and sparse encoders# | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/pinecone_hybrid_search.html |
Embeddings are used for the dense vectors; a tokenizer is used for the sparse vector.
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
To encode the text to sparse values, you can choose either SPLADE or BM25. For out-of-domain tasks we recommend using BM25.
For more information about the sparse encoders, you can check out the pinecone-text library docs.
from pinecone_text.sparse import BM25Encoder
# or from pinecone_text.sparse import SpladeEncoder if you wish to work with SPLADE
# use default tf-idf values
bm25_encoder = BM25Encoder().default()
The above code uses default tf-idf values. It’s highly recommended to fit the tf-idf values to your own corpus. You can do that as follows:
corpus = ["foo", "bar", "world", "hello"]
# fit tf-idf values on your corpus
bm25_encoder.fit(corpus)
# store the values to a json file
bm25_encoder.dump("bm25_values.json")
# load to your BM25Encoder object
bm25_encoder = BM25Encoder().load("bm25_values.json")
Load Retriever#
We can now construct the retriever!
retriever = PineconeHybridSearchRetriever(embeddings=embeddings, sparse_encoder=bm25_encoder, index=index)
Add texts (if necessary)#
We can optionally add texts to the retriever (if they aren’t already in there)
retriever.add_texts(["foo", "bar", "world", "hello"])
100%|██████████| 1/1 [00:02<00:00, 2.27s/it]
Use Retriever#
We can now use the retriever! | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/pinecone_hybrid_search.html |
result = retriever.get_relevant_documents("foo")
result[0]
Document(page_content='foo', metadata={})
Metal#
This notebook shows how to use Metal’s retriever.
First, you will need to sign up for Metal and get an API key. You can do so here
# !pip install metal_sdk
from metal_sdk.metal import Metal
API_KEY = ""
CLIENT_ID = ""
INDEX_ID = ""
metal = Metal(API_KEY, CLIENT_ID, INDEX_ID);
Ingest Documents#
You only need to do this if you haven’t already set up an index
metal.index( {"text": "foo1"})
metal.index( {"text": "foo"})
{'data': {'id': '642739aa7559b026b4430e42',
'text': 'foo',
'createdAt': '2023-03-31T19:51:06.748Z'}}
Query#
Now that our index is set up, we can set up a retriever and start querying it.
from langchain.retrievers import MetalRetriever
retriever = MetalRetriever(metal, params={"limit": 2})
retriever.get_relevant_documents("foo1")
[Document(page_content='foo1', metadata={'dist': '1.19209289551e-07', 'id': '642739a17559b026b4430e40', 'createdAt': '2023-03-31T19:50:57.853Z'}),
Document(page_content='foo1', metadata={'dist': '4.05311584473e-06', 'id': '642738f67559b026b4430e3c', 'createdAt': '2023-03-31T19:48:06.769Z'})]
ElasticSearch BM25#
This notebook goes over how to use a retriever that under the hood uses Elasticsearch and BM25.
For more information on the details of BM25 see this blog post.
from langchain.retrievers import ElasticSearchBM25Retriever
Create New Retriever#
elasticsearch_url="http://localhost:9200"
retriever = ElasticSearchBM25Retriever.create(elasticsearch_url, "langchain-index-4")
# Alternatively, you can load an existing index
# import elasticsearch
# elasticsearch_url="http://localhost:9200"
# retriever = ElasticSearchBM25Retriever(elasticsearch.Elasticsearch(elasticsearch_url), "langchain-index")
Add texts (if necessary)#
We can optionally add texts to the retriever (if they aren’t already in there)
retriever.add_texts(["foo", "bar", "world", "hello", "foo bar"])
['cbd4cb47-8d9f-4f34-b80e-ea871bc49856',
'f3bd2e24-76d1-4f9b-826b-ec4c0e8c7365',
'8631bfc8-7c12-48ee-ab56-8ad5f373676e',
'8be8374c-3253-4d87-928d-d73550a2ecf0',
'd79f457b-2842-4eab-ae10-77aa420b53d7']
Use Retriever#
We can now use the retriever!
result = retriever.get_relevant_documents("foo")
result | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/elastic_search_bm25.html |
[Document(page_content='foo', metadata={}),
Document(page_content='foo bar', metadata={})]
VectorStore Retriever#
The index - and therefore the retriever - that LangChain has the most support for is a VectorStoreRetriever. As the name suggests, this retriever is backed heavily by a VectorStore.
Once you construct a VectorStore, it’s very easy to construct a retriever. Let’s walk through an example.
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
db = FAISS.from_documents(texts, embeddings)
Exiting: Cleaning up .chroma directory
retriever = db.as_retriever()
docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")
By default, the vectorstore retriever uses similarity search. If the underlying vectorstore supports maximum marginal relevance search, you can specify that as the search type.
retriever = db.as_retriever(search_type="mmr")
docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")
You can also specify search kwargs like k to use when doing retrieval.
retriever = db.as_retriever(search_kwargs={"k": 1})
docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")
len(docs)
1
Contextual Compression Retriever#
This notebook introduces the concept of DocumentCompressors and the ContextualCompressionRetriever. The core idea is simple: given a specific query, we should be able to return only the documents relevant to that query, and only the parts of those documents that are relevant. The ContextualCompressionRetriever is a wrapper for another retriever that iterates over the initial output of the base retriever and filters and compresses those initial documents, so that only the most relevant information is returned.
# Helper function for printing docs
def pretty_print_docs(docs):
    print(f"\n{'-' * 100}\n".join([f"Document {i+1}:\n\n" + d.page_content for i, d in enumerate(docs)]))
Using a vanilla vector store retriever#
Let’s start by initializing a simple vector store retriever and storing the 2022 State of the Union speech (in chunks). We can see that, given an example question, our retriever returns one or two relevant docs and a few irrelevant docs. And even the relevant docs have a lot of irrelevant information in them.
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.document_loaders import TextLoader
from langchain.vectorstores import FAISS
documents = TextLoader('../../../state_of_the_union.txt').load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents) | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html |
retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever()
docs = retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson")
pretty_print_docs(docs)
Document 1:
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
----------------------------------------------------------------------------------------------------
Document 2:
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.
We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling.
We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers.
We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html |
2e39f768c94c-2 | We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.
----------------------------------------------------------------------------------------------------
Document 3:
And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong.
As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential.
While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.
And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things.
So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together.
First, beat the opioid epidemic.
----------------------------------------------------------------------------------------------------
Document 4:
Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers.
And as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up.
That ends on my watch.
Medicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect.
We’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees.
Let’s pass the Paycheck Fairness Act and paid leave.
Raise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html |
2e39f768c94c-3 | Let’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.
Adding contextual compression with an LLMChainExtractor#
Now let’s wrap our base retriever with a ContextualCompressionRetriever. We’ll add an LLMChainExtractor, which will iterate over the initially returned documents and extract from each only the content that is relevant to the query.
from langchain.llms import OpenAI
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor
llm = OpenAI(temperature=0)
compressor = LLMChainExtractor.from_llm(llm)
compression_retriever = ContextualCompressionRetriever(base_compressor=compressor, base_retriever=retriever)
compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Jackson Brown")
pretty_print_docs(compressed_docs)
Document 1:
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence."
----------------------------------------------------------------------------------------------------
Document 2:
"A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
More built-in compressors: filters#
LLMChainFilter# | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html |
The LLMChainFilter is a slightly simpler but more robust compressor that uses an LLM chain to decide which of the initially retrieved documents to filter out and which ones to return, without manipulating the document contents.
from langchain.retrievers.document_compressors import LLMChainFilter
_filter = LLMChainFilter.from_llm(llm)
compression_retriever = ContextualCompressionRetriever(base_compressor=_filter, base_retriever=retriever)
compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Jackson Brown")
pretty_print_docs(compressed_docs)
Document 1:
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
EmbeddingsFilter#
Making an extra LLM call over each retrieved document is expensive and slow. The EmbeddingsFilter provides a cheaper and faster option by embedding the documents and query and only returning those documents which have sufficiently similar embeddings to the query.
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.document_compressors import EmbeddingsFilter | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html |
embeddings = OpenAIEmbeddings()
embeddings_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76)
compression_retriever = ContextualCompressionRetriever(base_compressor=embeddings_filter, base_retriever=retriever)
compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Jackson Brown")
pretty_print_docs(compressed_docs)
Document 1:
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
----------------------------------------------------------------------------------------------------
Document 2:
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html |
2e39f768c94c-6 | We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling.
We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers.
We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster.
We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.
----------------------------------------------------------------------------------------------------
Document 3:
And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong.
As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential.
While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.
And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things.
So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together.
First, beat the opioid epidemic.
Stringing compressors and document transformers together#
Using the DocumentCompressorPipeline, we can also easily combine multiple compressors in sequence. Along with compressors, we can add BaseDocumentTransformers to our pipeline, which don’t perform any contextual compression but simply perform some transformation on a set of documents. For example, TextSplitters can be used as document transformers to split documents into smaller pieces, and the EmbeddingsRedundantFilter can be used to filter out redundant documents based on embedding similarity between documents. | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html |
2e39f768c94c-7 | Below we create a compressor pipeline by first splitting our docs into smaller chunks, then removing redundant documents, and then filtering based on relevance to the query.
from langchain.document_transformers import EmbeddingsRedundantFilter
from langchain.retrievers.document_compressors import DocumentCompressorPipeline
from langchain.text_splitter import CharacterTextSplitter
splitter = CharacterTextSplitter(chunk_size=300, chunk_overlap=0, separator=". ")
redundant_filter = EmbeddingsRedundantFilter(embeddings=embeddings)
relevant_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76)
pipeline_compressor = DocumentCompressorPipeline(
transformers=[splitter, redundant_filter, relevant_filter]
)
compression_retriever = ContextualCompressionRetriever(base_compressor=pipeline_compressor, base_retriever=retriever)
compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Jackson Brown")
pretty_print_docs(compressed_docs)
Document 1:
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson
----------------------------------------------------------------------------------------------------
Document 2:
As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential.
While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year
----------------------------------------------------------------------------------------------------
Document 3:
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder
Weaviate Hybrid Search#
This notebook shows how to use Weaviate hybrid search as a LangChain retriever.
import weaviate
import os
WEAVIATE_URL = "..."
client = weaviate.Client(
url=WEAVIATE_URL,
)
from langchain.retrievers.weaviate_hybrid_search import WeaviateHybridSearchRetriever
from langchain.schema import Document
retriever = WeaviateHybridSearchRetriever(client, index_name="LangChain", text_key="text")
docs = [Document(page_content="foo")]
retriever.add_documents(docs)
['3f79d151-fb84-44cf-85e0-8682bfe145e0']
retriever.get_relevant_documents("foo")
[Document(page_content='foo', metadata={})]
Databerry#
This notebook shows how to use Databerry’s retriever.
First, you will need to sign up for Databerry, create a datastore, add some data, and get your datastore API endpoint URL.
Query#
Now that our index is set up, we can set up a retriever and start querying it.
from langchain.retrievers import DataberryRetriever
retriever = DataberryRetriever(
datastore_url="https://clg1xg2h80000l708dymr0fxc.databerry.ai/query",
# api_key="DATABERRY_API_KEY", # optional if datastore is public
# top_k=10 # optional
)
retriever.get_relevant_documents("What is Daftpage?")
[Document(page_content='✨ Made with DaftpageOpen main menuPricingTemplatesLoginSearchHelpGetting StartedFeaturesAffiliate ProgramGetting StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!DaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord', metadata={'source': 'https:/daftpage.com/help/getting-started', 'score': 0.8697265}), | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/databerry.html |
dbff85cd8a3f-1 | Document(page_content="✨ Made with DaftpageOpen main menuPricingTemplatesLoginSearchHelpGetting StartedFeaturesAffiliate ProgramHelp CenterWelcome to Daftpage’s help center—the one-stop shop for learning everything about building websites with Daftpage.Daftpage is the simplest way to create websites for all purposes in seconds. Without knowing how to code, and for free!Get StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!Start here✨ Create your first site🧱 Add blocks🚀 PublishGuides🔖 Add a custom domainFeatures🔥 Drops🎨 Drawings👻 Ghost mode💀 Skeleton modeCant find the answer you're looking for?mail us at support@daftpage.comJoin the awesome Daftpage community on: 👾 DiscordDaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord", metadata={'source': 'https:/daftpage.com/help', 'score': 0.86570895}), | https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/databerry.html |
dbff85cd8a3f-2 | Document(page_content=" is the simplest way to create websites for all purposes in seconds. Without knowing how to code, and for free!Get StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!Start here✨ Create your first site🧱 Add blocks🚀 PublishGuides🔖 Add a custom domainFeatures🔥 Drops🎨 Drawings👻 Ghost mode💀 Skeleton modeCant find the answer you're looking for?mail us at support@daftpage.comJoin the awesome Daftpage community on: 👾 DiscordDaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord", metadata={'source': 'https:/daftpage.com/help', 'score': 0.8645384})]
TF-IDF Retriever#
This notebook goes over how to use a retriever that under the hood uses TF-IDF, implemented with scikit-learn.
For more information on the details of TF-IDF see this blog post.
from langchain.retrievers import TFIDFRetriever
# !pip install scikit-learn
Create New Retriever with Texts#
retriever = TFIDFRetriever.from_texts(["foo", "bar", "world", "hello", "foo bar"])
Use Retriever#
We can now use the retriever!
result = retriever.get_relevant_documents("foo")
result
[Document(page_content='foo', metadata={}),
Document(page_content='foo bar', metadata={}),
Document(page_content='hello', metadata={}),
Document(page_content='world', metadata={})]
Tracing Walkthrough#
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"
## Uncomment this if using hosted setup.
# os.environ["LANGCHAIN_ENDPOINT"] = "https://langchain-api-gateway-57eoxz8z.uc.gateway.dev"
## Uncomment this if you want traces to be recorded to "my_session" instead of default.
# os.environ["LANGCHAIN_SESSION"] = "my_session"
## Better to set this environment variable in the terminal
## Uncomment this if using hosted version. Replace "my_api_key" with your actual API Key.
# os.environ["LANGCHAIN_API_KEY"] = "my_api_key"
import langchain
from langchain.agents import Tool, initialize_agent, load_tools
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
# Agent run with tracing. Ensure that OPENAI_API_KEY is set appropriately to run this example.
llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(
tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("What is 2 raised to .123243 power?")
> Entering new AgentExecutor chain...
I need to use a calculator to solve this.
Action: Calculator
Action Input: 2^.123243
Observation: Answer: 1.0891804557407723
Thought: I now know the final answer.
Final Answer: 1.0891804557407723
> Finished chain.
'1.0891804557407723'
# Agent run with tracing using a chat model
agent = initialize_agent( | https://python.langchain.com/en/latest/tracing/agent_with_tracing.html |
tools, ChatOpenAI(temperature=0), agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("What is 2 raised to .123243 power?")
> Entering new AgentExecutor chain...
Question: What is 2 raised to .123243 power?
Thought: I need a calculator to solve this problem.
Action:
```
{
"action": "calculator",
"action_input": "2^0.123243"
}
```
Observation: calculator is not a valid tool, try another one.
I made a mistake, I need to use the correct tool for this question.
Action:
```
{
"action": "calculator",
"action_input": "2^0.123243"
}
```
Observation: calculator is not a valid tool, try another one.
I made a mistake, the tool name is actually "calc" instead of "calculator".
Action:
```
{
"action": "calc",
"action_input": "2^0.123243"
}
```
Observation: calc is not a valid tool, try another one.
I made another mistake, the tool name is actually "Calculator" instead of "calc".
Action:
```
{
"action": "Calculator",
"action_input": "2^0.123243"
}
```
Observation: Answer: 1.0891804557407723
Thought:The final answer is 1.0891804557407723.
Final Answer: 1.0891804557407723
> Finished chain.
'1.0891804557407723'
.md
.pdf
Locally Hosted Setup
Contents
Installation
Environment Setup
Locally Hosted Setup#
This page contains instructions for installing and then setting up the environment to use the locally hosted version of tracing.
Installation#
Ensure you have Docker installed (see Get Docker) and that it’s running.
Install the latest version of langchain: pip install langchain or pip install langchain -U to upgrade your
existing version.
Run langchain-server. This command was installed automatically when you ran the above command (pip install langchain).
This will spin up the server in the terminal, hosted on port 4173 by default.
Once you see the terminal output langchain-langchain-frontend-1 | ➜ Local: http://localhost:4173/, navigate to http://localhost:4173/
You should see a page with your tracing sessions. See the overview page for a walkthrough of the UI.
Currently, trace data is not guaranteed to be persisted between runs of langchain-server. If you want to
persist your data, you can mount a volume to the Docker container. See the Docker docs for more info.
To stop the server, press Ctrl+C in the terminal where you ran langchain-server.
Environment Setup#
After installation, you must now set up your environment to use tracing.
This can be done by setting an environment variable in your terminal by running export LANGCHAIN_HANDLER=langchain.
You can also do this by adding the below snippet to the top of every script. IMPORTANT: this must go at the VERY TOP of your script, before you import anything from langchain.
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"
.md
.pdf
Cloud Hosted Setup
Contents
Installation
Environment Setup
Cloud Hosted Setup#
We offer a hosted version of tracing at langchainplus.vercel.app. You can use this to view traces from your run without having to run the server locally.
Note: we are currently only offering this to a limited number of users. The hosted platform is VERY alpha, in active development, and data might be dropped at any time. Don’t depend on data being persisted in the system long term and don’t log traces that may contain sensitive information. If you’re interested in using the hosted platform, please fill out the form here.
Installation#
Login to the system and click “API Key” in the top right corner. Generate a new key and keep it safe. You will need it to authenticate with the system.
Environment Setup#
After installation, you must now set up your environment to use tracing.
This can be done by setting an environment variable in your terminal by running export LANGCHAIN_HANDLER=langchain.
You can also do this by adding the below snippet to the top of every script. IMPORTANT: this must go at the VERY TOP of your script, before you import anything from langchain.
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"
You will also need to set an environment variable to specify the endpoint and your API key. This can be done with the following environment variables:
LANGCHAIN_ENDPOINT - set this to "https://langchain-api-gateway-57eoxz8z.uc.gateway.dev"
LANGCHAIN_API_KEY - set this to the API key you generated during installation.
An example of adding all relevant environment variables is below:
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"
os.environ["LANGCHAIN_ENDPOINT"] = "https://langchain-api-gateway-57eoxz8z.uc.gateway.dev" | https://python.langchain.com/en/latest/tracing/hosted_installation.html |
b75c39bbfe5f-1 | os.environ["LANGCHAIN_API_KEY"] = "my_api_key" # Don't commit this to your repo! Better to set it in your terminal.
.md
.pdf
Question Answering over Docs
Contents
Document Question Answering
Adding in sources
Additional Related Resources
End-to-end examples
Question Answering over Docs#
Conceptual Guide
Question answering in this context refers to question answering over your document data.
For question answering over other types of data, please see the other relevant documentation, such as SQL database Question Answering or Interacting with APIs.
For question answering over many documents, you almost always want to create an index over the data.
This can be used to smartly access the most relevant documents for a given question, allowing you to avoid having to pass all the documents to the LLM (saving you time and money).
See this notebook for a more detailed introduction to this, but for a super quick start the steps involved are:
Load Your Documents
from langchain.document_loaders import TextLoader
loader = TextLoader('../state_of_the_union.txt')
See here for more information on how to get started with document loading.
Create Your Index
from langchain.indexes import VectorstoreIndexCreator
index = VectorstoreIndexCreator().from_loaders([loader])
The best and most popular index by far at the moment is the VectorStore index.
Query Your Index
query = "What did the president say about Ketanji Brown Jackson"
index.query(query)
Alternatively, use query_with_sources to also get back the sources involved
query = "What did the president say about Ketanji Brown Jackson"
index.query_with_sources(query)
Again, these high-level interfaces hide a lot of what is going on under the hood, so please see this notebook for a lower-level walkthrough.
Document Question Answering#
Question answering involves fetching multiple documents, and then asking a question of them.
The LLM response will contain the answer to your question, based on the content of the documents.
The recommended way to get started using a question answering chain is:
from langchain.chains.question_answering import load_qa_chain
chain = load_qa_chain(llm, chain_type="stuff")
chain.run(input_documents=docs, question=query)
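Putting these pieces together, a minimal end-to-end sketch might look like the following (assuming an OpenAI API key is set; the file path and query are placeholders):
from langchain.document_loaders import TextLoader
from langchain.llms import OpenAI
from langchain.chains.question_answering import load_qa_chain
# Load the documents to ask questions over (path is a placeholder)
docs = TextLoader('../state_of_the_union.txt').load()
# "stuff" simply stuffs all documents into a single prompt;
# for long documents, split them or build an index first (see above)
llm = OpenAI(temperature=0)
chain = load_qa_chain(llm, chain_type="stuff")
query = "What did the president say about Ketanji Brown Jackson"
print(chain.run(input_documents=docs, question=query))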
The following resources exist:
Question Answering Notebook: A notebook walking through how to accomplish this task.
VectorDB Question Answering Notebook: A notebook walking through how to do question answering over a vector database. This can often be useful for when you have a LOT of documents, and you don’t want to pass them all to the LLM, but rather first want to do some semantic search over embeddings.
Adding in sources#
There is also a variant of this, where in addition to responding with the answer the language model will also cite its sources (eg which of the documents passed in it used).
The recommended way to get started using a question answering with sources chain is:
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
chain = load_qa_with_sources_chain(llm, chain_type="stuff")
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
The following resources exist:
QA With Sources Notebook: A notebook walking through how to accomplish this task.
VectorDB QA With Sources Notebook: A notebook walking through how to do question answering with sources over a vector database. This can often be useful for when you have a LOT of documents, and you don’t want to pass them all to the LLM, but rather first want to do some semantic search over embeddings.
Additional Related Resources#
Additional related resources include:
Utilities for working with Documents: Guides on how to use several of the utilities which will prove helpful for this task, including Text Splitters (for splitting up long documents) and Embeddings & Vectorstores (useful for the above Vector DB example).
CombineDocuments Chains: A conceptual overview of specific types of chains by which you can accomplish this task.
End-to-end examples#
For examples of this done in an end-to-end manner, please see the following resources:
Semantic search over a group chat with Sources Notebook: A notebook that semantically searches over a group chat conversation.
.md
.pdf
Querying Tabular Data
Contents
Document Loading
Querying
Chains
Agents
Querying Tabular Data#
Conceptual Guide
Lots of data and information is stored in tabular form, whether in CSVs, Excel sheets, or SQL tables.
This page covers all resources available in LangChain for working with data in this format.
Document Loading#
If you have text data stored in a tabular format, you may want to load it into a Document and then index it as you would other text/unstructured data. For this, use a document loader like the CSVLoader, create an index over that data, and query it that way, as sketched below.
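A minimal sketch of that pattern (the CSV file name below is only a placeholder; this assumes an OpenAI API key is set):
from langchain.document_loaders import CSVLoader
from langchain.indexes import VectorstoreIndexCreator
# Each CSV row becomes one Document (file path is a placeholder)
loader = CSVLoader(file_path='titanic.csv')
# Build a vectorstore index over the rows and query it like any other text
index = VectorstoreIndexCreator().from_loaders([loader])
print(index.query("How many people survived?"))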
Querying#
If you have more numeric tabular data, or have a large amount of data and don’t want to index it, you should get started
by looking at various chains and agents we have for dealing with this data.
Chains#
If you are just getting started and have relatively small/simple tabular data, you should get started with chains.
Chains are a sequence of predetermined steps, so they give you more control and let you understand what is happening better.
SQL Database Chain
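For example, a minimal sketch of querying a SQL database with a chain (the connection string below is a placeholder; this assumes an OpenAI API key is set):
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain
# Any SQLAlchemy connection URI works; this path is a placeholder
db = SQLDatabase.from_uri("sqlite:///./Chinook.db")
llm = OpenAI(temperature=0)
# The chain writes a SQL query, runs it, and answers in natural language
db_chain = SQLDatabaseChain(llm=llm, database=db, verbose=True)
db_chain.run("How many employees are there?")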
Agents#
Agents are more complex, and involve multiple queries to the LLM to understand what to do.
The downside of agents is that you have less control. The upside is that they are more powerful,
which allows you to use them on larger databases and more complex schemas. A minimal agent sketch follows the links below.
SQL Agent
Pandas Agent
CSV Agent
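A minimal sketch of the Pandas agent (the CSV file is a placeholder; this assumes pandas is installed and an OpenAI API key is set):
import pandas as pd
from langchain.agents import create_pandas_dataframe_agent
from langchain.llms import OpenAI
# Any DataFrame works; the file path is a placeholder
df = pd.read_csv('titanic.csv')
# The agent writes and executes Python against the DataFrame to answer questions
agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)
agent.run("how many rows are there?")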
.md
.pdf
Interacting with APIs
Contents
Chains
Agents
Interacting with APIs#
Conceptual Guide
Lots of data and information is stored behind APIs.
This page covers all resources available in LangChain for working with APIs.
Chains#
If you are just getting started and have relatively simple APIs, you should get started with chains.
Chains are a sequence of predetermined steps, so they give you more control and let you understand what is happening better; see the sketch after the link below.
API Chain
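A minimal sketch of an API chain, here pointed at the Open-Meteo API docs bundled with LangChain (assumes an OpenAI API key is set; the question is just an example):
from langchain.llms import OpenAI
from langchain.chains import APIChain
from langchain.chains.api import open_meteo_docs
llm = OpenAI(temperature=0)
# The chain reads the API docs, constructs the request URL, calls it, and summarizes the response
chain = APIChain.from_llm_and_api_docs(llm, open_meteo_docs.OPEN_METEO_DOCS, verbose=True)
chain.run('What is the weather like right now in Munich, Germany in degrees Fahrenheit?')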
Agents#
Agents are more complex, and involve multiple queries to the LLM to understand what to do.
The downside of agents is that you have less control. The upside is that they are more powerful,
which allows you to use them on larger and more complex schemas.
OpenAPI Agent
.md
.pdf
Code Understanding
Contents
Conversational Retriever Chain
Code Understanding#
Overview
LangChain is a useful tool designed to parse GitHub code repositories. By leveraging VectorStores, Conversational RetrieverChain, and GPT-4, it can answer questions in the context of an entire GitHub repository or generate new code. This documentation page outlines the essential components of the system and offers guidance on using LangChain for better code comprehension, contextual question answering, and code generation in GitHub repositories.
Conversational Retriever Chain#
Conversational RetrieverChain is a retrieval-focused system that interacts with the data stored in a VectorStore. Utilizing advanced techniques, like context-aware filtering and ranking, it retrieves the most relevant code snippets and information for a given user query. Conversational RetrieverChain is engineered to deliver high-quality, pertinent results while considering conversation history and context.
LangChain Workflow for Code Understanding and Generation
Index the code base: Clone the target repository, load all files within, chunk the files, and execute the indexing process. Optionally, you can skip this step and use an already indexed dataset.
Embedding and Code Store: Code snippets are embedded using a code-aware embedding model and stored in a VectorStore.
Query Understanding: GPT-4 processes user queries, grasping the context and extracting relevant details.
Construct the Retriever: Conversational RetrieverChain searches the VectorStore to identify the most relevant code snippets for a given query.
Build the Conversational Chain: Customize the retriever settings and define any user-defined filters as needed.
Ask questions: Define a list of questions to ask about the codebase, and then use the ConversationalRetrievalChain to generate context-aware answers. The LLM (GPT-4) generates comprehensive, context-aware answers based on retrieved code snippets and conversation history.
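A compressed sketch of the retrieval and question-answering steps, assuming the repository has already been indexed into a Deep Lake dataset (the dataset path and question below are placeholders):
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import DeepLake
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
# Assumes the repo has already been chunked and embedded; the path is a placeholder
db = DeepLake(dataset_path="hub://<org>/<repo>", read_only=True, embedding_function=OpenAIEmbeddings())
retriever = db.as_retriever()
qa = ConversationalRetrievalChain.from_llm(ChatOpenAI(model='gpt-4'), retriever=retriever)
chat_history = []
result = qa({"question": "What does the ranking module do?", "chat_history": chat_history})
print(result['answer'])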
The full tutorial is available below.
Twitter the-algorithm codebase analysis with Deep Lake: A notebook walking through how to parse GitHub source code and run conversational queries over it.
LangChain codebase analysis with Deep Lake: A notebook walking through how to analyze and do question answering over THIS code base.
.md
.pdf
Agent Simulations
Contents
CAMEL
Generative Agents
Agent Simulations#
Agent simulations involve one or more agents interacting with each other.
Agent simulations generally involve two main components:
Long Term Memory
Simulation Environment
Specific implementations of agent simulations (or parts of agent simulations) include:
CAMEL#
CAMEL: an implementation of the CAMEL (Communicative Agents for “Mind” Exploration of Large Scale Language Model Society) paper, where two agents communicate with each other.
Generative Agents#
Generative Agents: This notebook implements a generative agent based on the paper Generative Agents: Interactive Simulacra of Human Behavior by Park, et al.
.md
.pdf
Summarization
Summarization#
Conceptual Guide
Summarization involves creating a smaller summary of multiple longer documents.
This can be useful for distilling long documents into the core pieces of information.
The recommended way to get started using a summarization chain is:
from langchain.chains.summarize import load_summarize_chain
chain = load_summarize_chain(llm, chain_type="map_reduce")
chain.run(docs)
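A minimal self-contained sketch (the file path is a placeholder; this assumes an OpenAI API key is set):
from langchain.llms import OpenAI
from langchain.text_splitter import CharacterTextSplitter
from langchain.docstore.document import Document
from langchain.chains.summarize import load_summarize_chain
llm = OpenAI(temperature=0)
text_splitter = CharacterTextSplitter()
# Any long text works here; the file path is a placeholder
with open('../state_of_the_union.txt') as f:
    texts = text_splitter.split_text(f.read())
docs = [Document(page_content=t) for t in texts[:3]]
# map_reduce summarizes each chunk, then summarizes the summaries
chain = load_summarize_chain(llm, chain_type="map_reduce")
print(chain.run(docs))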
The following resources exist:
Summarization Notebook: A notebook walking through how to accomplish this task.
Additional related resources include:
Utilities for working with Documents: Guides on how to use several of the utilities which will prove helpful for this task, including Text Splitters (for splitting up long documents).
.md
.pdf
Extraction
Extraction#
Conceptual Guide
Most APIs and databases still deal with structured information.
Therefore, in order to better work with those, it can be useful to extract structured information from text.
Examples of this include:
Extracting a structured row to insert into a database from a sentence
Extracting multiple rows to insert into a database from a long document
Extracting the correct API parameters from a user query
This work is closely related to output parsing.
Output parsers are responsible for instructing the LLM to respond in a specific format.
In this case, the output parsers specify the format of the data you would like to extract from the document.
Then, in addition to the output format instructions, the prompt should also contain the data you would like to extract information from.
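A minimal sketch of this pattern using the generic StructuredOutputParser (the schema and example sentence below are made up for illustration; an OpenAI API key is assumed):
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.output_parsers import StructuredOutputParser, ResponseSchema
# Describe the fields you want to pull out of the text (hypothetical schema)
response_schemas = [
    ResponseSchema(name="name", description="the person's name"),
    ResponseSchema(name="age", description="the person's age"),
]
parser = StructuredOutputParser.from_response_schemas(response_schemas)
# The format instructions tell the LLM to respond as structured data
prompt = PromptTemplate(
    template="Extract the requested information.\n{format_instructions}\nText: {text}",
    input_variables=["text"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)
llm = OpenAI(temperature=0)
output = llm(prompt.format(text="Alice is 30 years old and lives in Paris."))
print(parser.parse(output))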
While normal output parsers are good enough for basic structuring of response data,
when doing extraction you often want to extract more complicated or nested structures.
For a deep dive on extraction, we recommend checking out kor,
a library that uses the existing LangChain chain and OutputParser abstractions
but deep dives on allowing extraction of more complicated schemas.
.rst
.pdf
Evaluation
Contents
The Problem
The Solution
The Examples
Other Examples
Evaluation#
Note
Conceptual Guide
This section of documentation covers how we approach and think about evaluation in LangChain.
Both evaluation of internal chains/agents, but also how we would recommend people building on top of LangChain approach evaluation.
The Problem#
It can be really hard to evaluate LangChain chains and agents.
There are two main reasons for this:
# 1: Lack of data
You generally don’t have a ton of data to evaluate your chains/agents over before starting a project.
This is usually because Large Language Models (the core of most chains/agents) are terrific few-shot and zero-shot learners,
meaning you are almost always able to get started on a particular task (text-to-SQL, question answering, etc) without
a large dataset of examples.
This is in stark contrast to traditional machine learning where you had to first collect a bunch of datapoints
before even getting started using a model.
# 2: Lack of metrics
Most chains/agents are performing tasks for which there are not very good metrics to evaluate performance.
For example, one of the most common use cases is generating text of some form.
Evaluating generated text is much more complicated than evaluating a classification prediction, or a numeric prediction.
The Solution#
LangChain attempts to tackle both of those issues.
What we have so far are initial passes at solutions - we do not think we have a perfect solution.
So we very much welcome feedback, contributions, integrations, and thoughts on this.
Here is what we have for each problem so far:
# 1: Lack of data
We have started LangChainDatasets a Community space on Hugging Face.
We intend this to be a collection of open source datasets for evaluating common chains and agents.
We have contributed five datasets of our own to start, but we fully intend this to be a community effort.
In order to contribute a dataset, you simply need to join the community and then you will be able to upload datasets.
We’re also aiming to make it as easy as possible for people to create their own datasets.
As a first pass at this, we’ve added a QAGenerationChain, which, given a document, comes up with question-answer pairs that can be used to evaluate question-answering tasks over that document down the line.
See this notebook for an example of how to use this chain.
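A minimal sketch of generating such pairs (here doc stands for a Document you have already loaded; an OpenAI API key is assumed):
from langchain.chat_models import ChatOpenAI
from langchain.chains import QAGenerationChain
chain = QAGenerationChain.from_llm(ChatOpenAI(temperature=0))
# doc is a previously loaded langchain Document; page_content is its raw text
qa_pairs = chain.run(doc.page_content)
print(qa_pairs[0])  # a dict with a generated question and its answer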
# 2: Lack of metrics
We have two solutions to the lack of metrics.
The first solution is to use no metrics, and rather just rely on looking at results by eye to get a sense for how the chain/agent is performing.
To assist in this, we have developed (and will continue to develop) tracing, a UI-based visualizer of your chain and agent runs.
The second solution we recommend is to use Language Models themselves to evaluate outputs.
For this we have a few different chains and prompts aimed at tackling this issue.
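For example, a minimal sketch using QAEvalChain, one of these evaluation chains (the example and prediction below are made up for illustration; an OpenAI API key is assumed):
from langchain.llms import OpenAI
from langchain.evaluation.qa import QAEvalChain
examples = [{"query": "What is 2 + 2?", "answer": "4"}]    # hand-written ground truth (hypothetical)
predictions = [{"result": "2 + 2 equals 4."}]              # outputs from the chain under test (hypothetical)
eval_chain = QAEvalChain.from_llm(OpenAI(temperature=0))
graded = eval_chain.evaluate(examples, predictions,
                             question_key="query", prediction_key="result")
print(graded[0])  # each graded item marks the prediction as correct or incorrect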
The Examples#
We have created a bunch of examples combining the above two solutions to show how we internally evaluate chains and agents when we are developing.
In addition to the examples we’ve curated, we also highly welcome contributions here.
To facilitate that, we’ve included a template notebook for community members to use to build their own examples.
The existing examples we have are:
Question Answering (State of Union): A notebook showing evaluation of a question-answering task over a State-of-the-Union address.
Question Answering (Paul Graham Essay): A notebook showing evaluation of a question-answering task over a Paul Graham essay.
SQL Question Answering (Chinook): A notebook showing evaluation of a question-answering task over a SQL database (the Chinook database).
Agent Vectorstore: A notebook showing evaluation of an agent doing question answering while routing between two different vector databases.
Agent Search + Calculator: A notebook showing evaluation of an agent doing question answering using a Search engine and a Calculator as tools.
Evaluating an OpenAPI Chain: A notebook showing evaluation of an OpenAPI chain, including how to generate test data if you don’t have any.
Other Examples#
In addition, we also have some more generic resources for evaluation.
Question Answering: An overview of LLMs aimed at evaluating question answering systems in general.
Data Augmented Question Answering: An end-to-end example of evaluating a question answering system focused on a specific document (a RetrievalQAChain to be precise). This example highlights how to use LLMs to come up with question/answer examples to evaluate over, and then highlights how to use LLMs to evaluate performance on those generated examples.
Hugging Face Datasets: Covers an example of loading and using a dataset from Hugging Face for evaluation.
.md
.pdf
Autonomous Agents
Contents
Baby AGI (Original Repo)
AutoGPT (Original Repo)
Autonomous Agents#
Autonomous Agents are agents designed to be long-running.
You give them one or multiple long-term goals, and they independently execute towards those goals.
The applications combine tool usage and long-term memory.
At the moment, Autonomous Agents are fairly experimental and based on other open-source projects.
By implementing these open-source projects in LangChain primitives we can get the benefits of LangChain -
easy switching and experimenting with multiple LLMs, usage of different vectorstores as memory,
and usage of LangChain’s collection of tools.
Baby AGI (Original Repo)#
Baby AGI: a notebook implementing BabyAGI as LLM Chains
Baby AGI with Tools: building off the above notebook, this example substitutes in an agent with tools as the execution tools, allowing it to actually take actions.
AutoGPT (Original Repo)#
AutoGPT: a notebook implementing AutoGPT in LangChain primitives
WebSearch Research Assistant: a notebook showing how to use AutoGPT plus specific tools to act as research assistant that can use the web.
.md
.pdf
Chatbots
Chatbots#
Conceptual Guide
Since language models are good at producing text, they are well suited to creating chatbots.
Aside from the base prompts/LLMs, an important concept to know for chatbots is memory.
Most chat-based applications rely on remembering what happened in previous interactions, which memory is designed to help with, as in the sketch below.
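A minimal sketch of a conversation with memory (assumes an OpenAI API key is set):
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
# The memory object stores prior turns and injects them into each new prompt
conversation = ConversationChain(
    llm=OpenAI(temperature=0),
    memory=ConversationBufferMemory(),
    verbose=True,
)
conversation.predict(input="Hi, my name is Bob.")
conversation.predict(input="What is my name?")  # answered from the stored history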
The following resources exist:
ChatGPT Clone: A notebook walking through how to recreate a ChatGPT-like experience with LangChain.
Conversation Memory: A notebook walking through how to use different types of conversational memory.
Conversation Agent: A notebook walking through how to create an agent optimized for conversation.
Additional related resources include:
Memory Key Concepts: Explanation of key concepts related to memory.
Memory Examples: A collection of how-to examples for working with memory.
.md
.pdf
Personal Assistants (Agents)
Personal Assistants (Agents)#
Conceptual Guide
We use “personal assistant” here in a very broad sense.
Personal assistants have a few characteristics:
They can interact with the outside world
They have knowledge of your data
They remember your interactions
Really all of the functionality in LangChain is relevant for building a personal assistant.
Highlighting specific parts:
Agent Documentation (for interacting with the outside world)
Index Documentation (for giving them knowledge of your data)
Memory (for helping them remember interactions)
Specific examples of this include:
AI Plugins: an implementation of an agent that is designed to be able to use all AI Plugins.
Wikibase Agent: an implementation of an agent that is designed to interact with Wikibase.
.ipynb
.pdf
Analysis of Twitter the-algorithm source code with LangChain, GPT4 and Deep Lake
Contents
1. Index the code base (optional)
2. Question Answering on Twitter algorithm codebase
Analysis of Twitter the-algorithm source code with LangChain, GPT4 and Deep Lake#
In this tutorial, we are going to use LangChain + Deep Lake with GPT-4 to analyze the code base of the Twitter algorithm.
!python3 -m pip install --upgrade langchain deeplake openai tiktoken
Define OpenAI embeddings, Deep Lake multi-modal vector store api and authenticate. For full documentation of Deep Lake please follow docs and API reference.
Authenticate into Deep Lake if you want to create your own dataset and publish it. You can get an API key from the platform
import os
import getpass
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import DeepLake
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
os.environ['ACTIVELOOP_TOKEN'] = getpass.getpass('Activeloop Token:')
embeddings = OpenAIEmbeddings()
1. Index the code base (optional)#
You can skip this part and jump straight to using the already indexed dataset. Otherwise, we will first clone the repository, then parse and chunk the code base, and index it with OpenAI embeddings.
!git clone https://github.com/twitter/the-algorithm # replace any repository of your choice
Load all files inside the repository
import os
from langchain.document_loaders import TextLoader
root_dir = './the-algorithm'
docs = []
for dirpath, dirnames, filenames in os.walk(root_dir):
for file in filenames:
try:
loader = TextLoader(os.path.join(dirpath, file), encoding='utf-8')
docs.extend(loader.load_and_split())
except Exception as e:
pass
Then, chunk the files
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(docs)
Execute the indexing. This will take about ~4 mins to compute embeddings and upload to Activeloop. You can then publish the dataset to be public.
db = DeepLake.from_documents(texts, embeddings, dataset_path="hub://davitbun/twitter-algorithm")
2. Question Answering on Twitter algorithm codebase#
First load the dataset, construct the retriever, then construct the Conversational Chain
db = DeepLake(dataset_path="hub://davitbun/twitter-algorithm", read_only=True, embedding_function=embeddings)
This dataset can be visualized in Jupyter Notebook by ds.visualize() or at https://app.activeloop.ai/davitbun/twitter-algorithm
hub://davitbun/twitter-algorithm loaded successfully.
Deep Lake Dataset in hub://davitbun/twitter-algorithm already exists, loading from the storage
Dataset(path='hub://davitbun/twitter-algorithm', read_only=True, tensors=['embedding', 'ids', 'metadata', 'text'])
tensor htype shape dtype compression
------- ------- ------- ------- -------
embedding generic (23152, 1536) float32 None
ids text (23152, 1) str None
metadata json (23152, 1) str None
text text (23152, 1) str None
retriever = db.as_retriever()
retriever.search_kwargs['distance_metric'] = 'cos'
retriever.search_kwargs['fetch_k'] = 100
retriever.search_kwargs['maximal_marginal_relevance'] = True
retriever.search_kwargs['k'] = 20
You can also specify user defined functions using Deep Lake filters
def filter(x):
# filter based on source code
if 'com.google' in x['text'].data()['value']:
return False
# filter based on path e.g. extension
metadata = x['metadata'].data()['value']
return 'scala' in metadata['source'] or 'py' in metadata['source']
### turn on below for custom filtering
# retriever.search_kwargs['filter'] = filter
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
model = ChatOpenAI(model='gpt-4') # 'gpt-3.5-turbo',
qa = ConversationalRetrievalChain.from_llm(model,retriever=retriever)
questions = [
"What does favCountParams do?",
"is it Likes + Bookmarks, or not clear from the code?",
"What are the major negative modifiers that lower your linear ranking parameters?",
"How do you get assigned to SimClusters?",
"What is needed to migrate from one SimClusters to another SimClusters?",
"How much do I get boosted within my cluster?",
"How does Heavy ranker work. what are it’s main inputs?",
"How can one influence Heavy ranker?",
"why threads and long tweets do so well on the platform?", | https://python.langchain.com/en/latest/use_cases/code/twitter-the-algorithm-analysis-deeplake.html |
54757141debe-3 | "why threads and long tweets do so well on the platform?",
"Are thread and long tweet creators building a following that reacts to only threads?",
"Do you need to follow different strategies to get most followers vs to get most likes and bookmarks per tweet?",
"Content meta data and how it impacts virality (e.g. ALT in images).",
"What are some unexpected fingerprints for spam factors?",
"Is there any difference between company verified checkmarks and blue verified individual checkmarks?",
]
chat_history = []
for question in questions:
result = qa({"question": question, "chat_history": chat_history})
chat_history.append((question, result['answer']))
print(f"-> **Question**: {question} \n")
print(f"**Answer**: {result['answer']} \n")
-> Question: What does favCountParams do?
Answer: favCountParams is an optional ThriftLinearFeatureRankingParams instance that represents the parameters related to the “favorite count” feature in the ranking process. It is used to control the weight of the favorite count feature while ranking tweets. The favorite count is the number of times a tweet has been marked as a favorite by users, and it is considered an important signal in the ranking of tweets. By using favCountParams, the system can adjust the importance of the favorite count while calculating the final ranking score of a tweet.
-> Question: is it Likes + Bookmarks, or not clear from the code?
Answer: From the provided code, it is not clear if the favorite count metric is determined by the sum of likes and bookmarks. The favorite count is mentioned in the code, but there is no explicit reference to how it is calculated in terms of likes and bookmarks.
-> Question: What are the major negative modifiers that lower your linear ranking parameters?
Answer: In the given code, major negative modifiers that lower the linear ranking parameters are:
scoringData.querySpecificScore: This score adjustment is based on the query-specific information. If its value is negative, it will lower the linear ranking parameters.
scoringData.authorSpecificScore: This score adjustment is based on the author-specific information. If its value is negative, it will also lower the linear ranking parameters.
Please note that I cannot provide more information on the exact calculations of these negative modifiers, as the code for their determination is not provided.
-> Question: How do you get assigned to SimClusters?
Answer: The assignment to SimClusters occurs through a Metropolis-Hastings sampling-based community detection algorithm that is run on the Producer-Producer similarity graph. This graph is created by computing the cosine similarity scores between the users who follow each producer. The algorithm identifies communities or clusters of Producers with similar followers, and takes a parameter k for specifying the number of communities to be detected.
After the community detection, different users and content are represented as sparse, interpretable vectors within these identified communities (SimClusters). The resulting SimClusters embeddings can be used for various recommendation tasks.
-> Question: What is needed to migrate from one SimClusters to another SimClusters?
Answer: To migrate from one SimClusters representation to another, you can follow these general steps:
Prepare the new representation: Create the new SimClusters representation using any necessary updates or changes in the clustering algorithm, similarity measures, or other model parameters. Ensure that this new representation is properly stored and indexed as needed.
Update the relevant code and configurations: Modify the relevant code and configuration files to reference the new SimClusters representation. This may involve updating paths or dataset names to point to the new representation, as well as changing code to use the new clustering method or similarity functions if applicable.
Test the new representation: Before deploying the changes to production, thoroughly test the new SimClusters representation to ensure its effectiveness and stability. This may involve running offline jobs like candidate generation and label candidates, validating the output, as well as testing the new representation in the evaluation environment using evaluation tools like TweetSimilarityEvaluationAdhocApp.
Deploy the changes: Once the new representation has been tested and validated, deploy the changes to production. This may involve creating a zip file, uploading it to the packer, and then scheduling it with Aurora. Be sure to monitor the system to ensure a smooth transition between representations and verify that the new representation is being used in recommendations as expected.
Monitor and assess the new representation: After the new representation has been deployed, continue to monitor its performance and impact on recommendations. Take note of any improvements or issues that arise and be prepared to iterate on the new representation if needed. Always ensure that the results and performance metrics align with the system’s goals and objectives.
-> Question: How much do I get boosted within my cluster?
Answer: It’s not possible to determine the exact amount your content is boosted within your cluster in the SimClusters representation without specific data about your content and its engagement metrics. However, a combination of factors, such as the favorite score and follow score, alongside other engagement signals and SimCluster calculations, influence the boosting of content.
-> Question: How does Heavy ranker work. what are it’s main inputs?
Answer: The Heavy Ranker is a machine learning model that plays a crucial role in ranking and scoring candidates within the recommendation algorithm. Its primary purpose is to predict the likelihood of a user engaging with a tweet or connecting with another user on the platform.
Main inputs to the Heavy Ranker consist of:
Static Features: These are features that can be computed directly from a tweet at the time it’s created, such as whether it has a URL, has cards, has quotes, etc. These features are produced by the Index Ingester as the tweets are generated and stored in the index.
Real-time Features: These per-tweet features can change after the tweet has been indexed. They mostly consist of social engagements like retweet count, favorite count, reply count, and some spam signals that are computed with later activities. The Signal Ingester, which is part of a Heron topology, processes multiple event streams to collect and compute these real-time features.
User Table Features: These per-user features are obtained from the User Table Updater that processes a stream written by the user service. This input is used to store sparse real-time user information, which is later propagated to the tweet being scored by looking up the author of the tweet.
Search Context Features: These features represent the context of the current searcher, like their UI language, their content consumption, and the current time (implied). They are combined with Tweet Data to compute some of the features used in scoring.
These inputs are then processed by the Heavy Ranker to score and rank candidates based on their relevance and likelihood of engagement by the user.
-> Question: How can one influence Heavy ranker?
Answer: To influence the Heavy Ranker’s output or ranking of content, consider the following actions:
Improve content quality: Create high-quality and engaging content that is relevant, informative, and valuable to users. High-quality content is more likely to receive positive user engagement, which the Heavy Ranker considers when ranking content.
Increase user engagement: Encourage users to interact with content through likes, retweets, replies, and comments. Higher engagement levels can lead to better ranking in the Heavy Ranker’s output.
Optimize your user profile: A user’s reputation, based on factors such as their follower count and follower-to-following ratio, may impact the ranking of their content. Maintain a good reputation by following relevant users, keeping a reasonable follower-to-following ratio and engaging with your followers.
Enhance content discoverability: Use relevant keywords, hashtags, and mentions in your tweets, making it easier for users to find and engage with your content. This increased discoverability may help improve the ranking of your content by the Heavy Ranker.
Leverage multimedia content: Experiment with different content formats, such as videos, images, and GIFs, which may capture users’ attention and increase engagement, resulting in better ranking by the Heavy Ranker.
User feedback: Monitor and respond to feedback for your content. Positive feedback may improve your ranking, while negative feedback provides an opportunity to learn and improve.
Note that the Heavy Ranker uses a combination of machine learning models and various features to rank the content. While the above actions may help influence the ranking, there are no guarantees as the ranking process is determined by a complex algorithm, which evolves over time.
-> Question: why threads and long tweets do so well on the platform?
Answer: Threads and long tweets perform well on the platform for several reasons:
More content and context: Threads and long tweets provide more information and context about a topic, which can make the content more engaging and informative for users. People tend to appreciate a well-structured and detailed explanation of a subject or a story, and threads and long tweets can do that effectively.
Increased user engagement: As threads and long tweets provide more content, they also encourage users to engage with the tweets through replies, retweets, and likes. This increased engagement can lead to better visibility of the content, as the Twitter algorithm considers user engagement when ranking and surfacing tweets.
Narrative structure: Threads enable users to tell stories or present arguments in a step-by-step manner, making the information more accessible and easier to follow. This narrative structure can capture users’ attention and encourage them to read through the entire thread and interact with the content.
Expanded reach: When users engage with a thread, their interactions can bring the content to the attention of their followers, helping to expand the reach of the thread. This increased visibility can lead to more interactions and higher performance for the threaded tweets.
Higher content quality: Generally, threads and long tweets require more thought and effort to create, which may lead to higher quality content. Users are more likely to appreciate and interact with high-quality, well-reasoned content, further improving the performance of these tweets within the platform.
Overall, threads and long tweets perform well on Twitter because they encourage user engagement and provide a richer, more informative experience that users find valuable.
-> Question: Are thread and long tweet creators building a following that reacts to only threads?
Answer: Based on the provided code and context, there isn’t enough information to conclude if the creators of threads and long tweets primarily build a following that engages with only thread-based content. The code provided is focused on Twitter’s recommendation and ranking algorithms, as well as infrastructure components like Kafka, partitions, and the Follow Recommendations Service (FRS). To answer your question, data analysis of user engagement and results of specific edge cases would be required.
-> Question: Do you need to follow different strategies to get most followers vs to get most likes and bookmarks per tweet?
Answer: Yes, different strategies need to be followed to maximize the number of followers compared to maximizing likes and bookmarks per tweet. While there may be some overlap in the approaches, they target different aspects of user engagement.
Maximizing followers: The primary focus is on growing your audience on the platform. Strategies include:
Consistently sharing high-quality content related to your niche or industry.
Engaging with others on the platform by replying, retweeting, and mentioning other users.
Using relevant hashtags and participating in trending conversations.
Collaborating with influencers and other users with a large following.
Posting at optimal times when your target audience is most active.
Optimizing your profile by using a clear profile picture, catchy bio, and relevant links.
Maximizing likes and bookmarks per tweet: The focus is on creating content that resonates with your existing audience and encourages engagement. Strategies include:
Crafting engaging and well-written tweets that encourage users to like or save them.
Incorporating visually appealing elements, such as images, GIFs, or videos, that capture attention.
Asking questions, sharing opinions, or sparking conversations that encourage users to engage with your tweets.
Using analytics to understand the type of content that resonates with your audience and tailoring your tweets accordingly.
Posting a mix of educational, entertaining, and promotional content to maintain variety and interest.
Timing your tweets strategically to maximize engagement, likes, and bookmarks per tweet.
Both strategies can overlap, and you may need to adapt your approach by understanding your target audience’s preferences and analyzing your account’s performance. However, it’s essential to recognize that maximizing followers and maximizing likes and bookmarks per tweet have different focuses and require specific strategies.
-> Question: Content meta data and how it impacts virality (e.g. ALT in images).
Answer: There is no direct information in the provided context about how content metadata, such as ALT text in images, impacts the virality of a tweet or post. However, it’s worth noting that including ALT text can improve the accessibility of your content for users who rely on screen readers, which may lead to increased engagement for a broader audience. Additionally, metadata can be used in search engine optimization, which might improve the visibility of the content, but the context provided does not mention any specific correlation with virality.
-> Question: What are some unexpected fingerprints for spam factors?
Answer: In the provided context, an unusual indicator of spam factors is when a tweet contains a non-media, non-news link. If the tweet has a link but does not have an image URL, video URL, or news URL, it is considered a potential spam vector, and a threshold for user reputation (tweepCredThreshold) is set to MIN_TWEEPCRED_WITH_LINK.
While this rule may not cover all possible unusual spam indicators, it is derived from the specific codebase and logic shared in the context.
-> Question: Is there any difference between company verified checkmarks and blue verified individual checkmarks?
Answer: Yes, there is a distinction between the verified checkmarks for companies and blue verified checkmarks for individuals. The code snippet provided mentions “Blue-verified account boost” which indicates that there is a separate category for blue verified accounts. Typically, blue verified checkmarks are used to indicate notable individuals, while verified checkmarks are for companies or organizations.
.ipynb
.pdf
Use LangChain, GPT and Deep Lake to work with code base
Contents
Design
Implementation
Integration preparations
Prepare data
Question Answering
Use LangChain, GPT and Deep Lake to work with code base#
In this tutorial, we are going to use LangChain + Deep Lake with GPT to analyze the code base of LangChain itself.
Design#
Prepare data:
Upload all python project files using the langchain.document_loaders.TextLoader. We will call these files the documents.
Split all documents to chunks using the langchain.text_splitter.CharacterTextSplitter.
Embed chunks and upload them into the DeepLake using langchain.embeddings.openai.OpenAIEmbeddings and langchain.vectorstores.DeepLake
Question-Answering:
Build a chain from langchain.chat_models.ChatOpenAI and langchain.chains.ConversationalRetrievalChain
Prepare questions.
Get answers running the chain.
Implementation#
Integration preparations#
We need to set up keys for external services and install necessary python libraries.
#!python3 -m pip install --upgrade langchain deeplake openai
Set up OpenAI embeddings, Deep Lake multi-modal vector store api and authenticate.
For full documentation of Deep Lake please follow https://docs.activeloop.ai/ and API reference https://docs.deeplake.ai/en/latest/
import os
from getpass import getpass
os.environ['OPENAI_API_KEY'] = getpass()
# Please manually enter OpenAI Key
········
Authenticate into Deep Lake if you want to create your own dataset and publish it. You can get an API key from the platform at app.activeloop.ai
os.environ['ACTIVELOOP_TOKEN'] = getpass('Activeloop Token:')  # getpass was imported as a function above
········
Prepare data#
Load all repository files. Here we assume this notebook is downloaded as part of a langchain fork and we work with the Python files of the langchain repo.
If you want to use files from a different repo, change root_dir to the root dir of your repo.
from langchain.document_loaders import TextLoader
root_dir = '../../../..'
docs = []
for dirpath, dirnames, filenames in os.walk(root_dir):
for file in filenames:
if file.endswith('.py') and '/.venv/' not in dirpath:
try:
loader = TextLoader(os.path.join(dirpath, file), encoding='utf-8')
docs.extend(loader.load_and_split())
except Exception as e:
pass
print(f'{len(docs)}')
1147
Then, chunk the files
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(docs)
print(f"{len(texts)}")
Created a chunk of size 1620, which is longer than the specified 1000
Created a chunk of size 1213, which is longer than the specified 1000
Created a chunk of size 1263, which is longer than the specified 1000
Created a chunk of size 1448, which is longer than the specified 1000
Created a chunk of size 1120, which is longer than the specified 1000
Created a chunk of size 1148, which is longer than the specified 1000
Created a chunk of size 1826, which is longer than the specified 1000
Created a chunk of size 1260, which is longer than the specified 1000
Created a chunk of size 1195, which is longer than the specified 1000
Created a chunk of size 2147, which is longer than the specified 1000
Created a chunk of size 1410, which is longer than the specified 1000
Created a chunk of size 1269, which is longer than the specified 1000
Created a chunk of size 1030, which is longer than the specified 1000
Created a chunk of size 1046, which is longer than the specified 1000
Created a chunk of size 1024, which is longer than the specified 1000
Created a chunk of size 1026, which is longer than the specified 1000
Created a chunk of size 1285, which is longer than the specified 1000
Created a chunk of size 1370, which is longer than the specified 1000
Created a chunk of size 1031, which is longer than the specified 1000
Created a chunk of size 1999, which is longer than the specified 1000
Created a chunk of size 1029, which is longer than the specified 1000
Created a chunk of size 1120, which is longer than the specified 1000
Created a chunk of size 1033, which is longer than the specified 1000
Created a chunk of size 1143, which is longer than the specified 1000
Created a chunk of size 1416, which is longer than the specified 1000
Created a chunk of size 2482, which is longer than the specified 1000
Created a chunk of size 1890, which is longer than the specified 1000
Created a chunk of size 1418, which is longer than the specified 1000
Created a chunk of size 1848, which is longer than the specified 1000
Created a chunk of size 1069, which is longer than the specified 1000
Created a chunk of size 2369, which is longer than the specified 1000
Created a chunk of size 1045, which is longer than the specified 1000
Created a chunk of size 1501, which is longer than the specified 1000
Created a chunk of size 1208, which is longer than the specified 1000
Created a chunk of size 1950, which is longer than the specified 1000
Created a chunk of size 1283, which is longer than the specified 1000
Created a chunk of size 1414, which is longer than the specified 1000
Created a chunk of size 1304, which is longer than the specified 1000
Created a chunk of size 1224, which is longer than the specified 1000
Created a chunk of size 1060, which is longer than the specified 1000
Created a chunk of size 2461, which is longer than the specified 1000
Created a chunk of size 1099, which is longer than the specified 1000
Created a chunk of size 1178, which is longer than the specified 1000
Created a chunk of size 1449, which is longer than the specified 1000
Created a chunk of size 1345, which is longer than the specified 1000
Created a chunk of size 3359, which is longer than the specified 1000
Created a chunk of size 2248, which is longer than the specified 1000
Created a chunk of size 1589, which is longer than the specified 1000
Created a chunk of size 2104, which is longer than the specified 1000
Created a chunk of size 1505, which is longer than the specified 1000
Created a chunk of size 1387, which is longer than the specified 1000
Created a chunk of size 1215, which is longer than the specified 1000
Created a chunk of size 1240, which is longer than the specified 1000
Created a chunk of size 1635, which is longer than the specified 1000
Created a chunk of size 1075, which is longer than the specified 1000
Created a chunk of size 2180, which is longer than the specified 1000
Created a chunk of size 1791, which is longer than the specified 1000
Created a chunk of size 1555, which is longer than the specified 1000
Created a chunk of size 1082, which is longer than the specified 1000
Created a chunk of size 1225, which is longer than the specified 1000
Created a chunk of size 1287, which is longer than the specified 1000
Created a chunk of size 1085, which is longer than the specified 1000
Created a chunk of size 1117, which is longer than the specified 1000
Created a chunk of size 1966, which is longer than the specified 1000
Created a chunk of size 1150, which is longer than the specified 1000
Created a chunk of size 1285, which is longer than the specified 1000
Created a chunk of size 1150, which is longer than the specified 1000
Created a chunk of size 1585, which is longer than the specified 1000
Created a chunk of size 1208, which is longer than the specified 1000
Created a chunk of size 1267, which is longer than the specified 1000
Created a chunk of size 1542, which is longer than the specified 1000
Created a chunk of size 1183, which is longer than the specified 1000
Created a chunk of size 2424, which is longer than the specified 1000
Created a chunk of size 1017, which is longer than the specified 1000
Created a chunk of size 1304, which is longer than the specified 1000
Created a chunk of size 1379, which is longer than the specified 1000
Created a chunk of size 1324, which is longer than the specified 1000
Created a chunk of size 1205, which is longer than the specified 1000
Created a chunk of size 1056, which is longer than the specified 1000
Created a chunk of size 1195, which is longer than the specified 1000
Created a chunk of size 3608, which is longer than the specified 1000
Created a chunk of size 1058, which is longer than the specified 1000
Created a chunk of size 1075, which is longer than the specified 1000
Created a chunk of size 1217, which is longer than the specified 1000
Created a chunk of size 1109, which is longer than the specified 1000
Created a chunk of size 1440, which is longer than the specified 1000
Created a chunk of size 1046, which is longer than the specified 1000
Created a chunk of size 1220, which is longer than the specified 1000 | https://python.langchain.com/en/latest/use_cases/code/code-analysis-deeplake.html |
Created a chunk of size 1403, which is longer than the specified 1000
Created a chunk of size 1241, which is longer than the specified 1000
Created a chunk of size 1427, which is longer than the specified 1000
Created a chunk of size 1049, which is longer than the specified 1000
Created a chunk of size 1580, which is longer than the specified 1000
Created a chunk of size 1565, which is longer than the specified 1000
Created a chunk of size 1131, which is longer than the specified 1000
Created a chunk of size 1425, which is longer than the specified 1000
Created a chunk of size 1054, which is longer than the specified 1000
Created a chunk of size 1027, which is longer than the specified 1000
Created a chunk of size 2559, which is longer than the specified 1000
Created a chunk of size 1028, which is longer than the specified 1000
Created a chunk of size 1382, which is longer than the specified 1000
Created a chunk of size 1888, which is longer than the specified 1000
Created a chunk of size 1475, which is longer than the specified 1000
Created a chunk of size 1652, which is longer than the specified 1000
Created a chunk of size 1891, which is longer than the specified 1000
Created a chunk of size 1899, which is longer than the specified 1000
Created a chunk of size 1021, which is longer than the specified 1000
Created a chunk of size 1085, which is longer than the specified 1000 | https://python.langchain.com/en/latest/use_cases/code/code-analysis-deeplake.html |
Created a chunk of size 1854, which is longer than the specified 1000
Created a chunk of size 1672, which is longer than the specified 1000
Created a chunk of size 2537, which is longer than the specified 1000
Created a chunk of size 1251, which is longer than the specified 1000
Created a chunk of size 1734, which is longer than the specified 1000
Created a chunk of size 1642, which is longer than the specified 1000
Created a chunk of size 1376, which is longer than the specified 1000
Created a chunk of size 1253, which is longer than the specified 1000
Created a chunk of size 1642, which is longer than the specified 1000
Created a chunk of size 1419, which is longer than the specified 1000
Created a chunk of size 1438, which is longer than the specified 1000
Created a chunk of size 1427, which is longer than the specified 1000
Created a chunk of size 1684, which is longer than the specified 1000
Created a chunk of size 1760, which is longer than the specified 1000
Created a chunk of size 1157, which is longer than the specified 1000
Created a chunk of size 2504, which is longer than the specified 1000
Created a chunk of size 1082, which is longer than the specified 1000
Created a chunk of size 2268, which is longer than the specified 1000
Created a chunk of size 1784, which is longer than the specified 1000
Created a chunk of size 1311, which is longer than the specified 1000 | https://python.langchain.com/en/latest/use_cases/code/code-analysis-deeplake.html |
Created a chunk of size 2972, which is longer than the specified 1000
Created a chunk of size 1144, which is longer than the specified 1000
Created a chunk of size 1825, which is longer than the specified 1000
Created a chunk of size 1508, which is longer than the specified 1000
Created a chunk of size 2901, which is longer than the specified 1000
Created a chunk of size 1715, which is longer than the specified 1000
Created a chunk of size 1062, which is longer than the specified 1000
Created a chunk of size 1206, which is longer than the specified 1000
Created a chunk of size 1102, which is longer than the specified 1000
Created a chunk of size 1184, which is longer than the specified 1000
Created a chunk of size 1002, which is longer than the specified 1000
Created a chunk of size 1065, which is longer than the specified 1000
Created a chunk of size 1871, which is longer than the specified 1000
Created a chunk of size 1754, which is longer than the specified 1000
Created a chunk of size 2413, which is longer than the specified 1000
Created a chunk of size 1771, which is longer than the specified 1000
Created a chunk of size 2054, which is longer than the specified 1000
Created a chunk of size 2000, which is longer than the specified 1000
Created a chunk of size 2061, which is longer than the specified 1000
Created a chunk of size 1066, which is longer than the specified 1000 | https://python.langchain.com/en/latest/use_cases/code/code-analysis-deeplake.html |
Created a chunk of size 1419, which is longer than the specified 1000
Created a chunk of size 1368, which is longer than the specified 1000
Created a chunk of size 1008, which is longer than the specified 1000
Created a chunk of size 1227, which is longer than the specified 1000
Created a chunk of size 1745, which is longer than the specified 1000
Created a chunk of size 2296, which is longer than the specified 1000
Created a chunk of size 1083, which is longer than the specified 1000
3477
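The "Created a chunk of size ..." messages above are warnings from the character splitter: any separator-delimited piece of source code longer than the requested chunk_size of 1000 is kept intact rather than cut mid-line. If the oversized chunks are a concern, a recursive splitter can usually break them down further. A minimal sketch, assuming the loaded source files are still in a list called docs from the splitting step above:
from langchain.text_splitter import RecursiveCharacterTextSplitter
# Falls back through progressively smaller separators ("\n\n", "\n", " ", ""),
# so far fewer chunks end up larger than chunk_size.
recursive_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
smaller_texts = recursive_splitter.split_documents(docs)  # `docs` is assumed from the earlier loading step
len(smaller_texts)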
Then embed the chunks and upload them to Deep Lake.
This can take several minutes.
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
embeddings
OpenAIEmbeddings(client=<class 'openai.api_resources.embedding.Embedding'>, model='text-embedding-ada-002', document_model_name='text-embedding-ada-002', query_model_name='text-embedding-ada-002', embedding_ctx_length=8191, openai_api_key=None, openai_organization=None, allowed_special=set(), disallowed_special='all', chunk_size=1000, max_retries=6)
from langchain.vectorstores import DeepLake
db = DeepLake.from_documents(texts, embeddings, dataset_path=f"hub://{DEEPLAKE_ACCOUNT_NAME}/langchain-code")
db
Question Answering#
First load the dataset, construct the retriever, then construct the Conversational Chain.
db = DeepLake(dataset_path=f"hub://{DEEPLAKE_ACCOUNT_NAME}/langchain-code", read_only=True, embedding_function=embeddings)
This dataset can be visualized in Jupyter Notebook by ds.visualize() or at https://app.activeloop.ai/user_name/langchain-code
hub://user_name/langchain-code loaded successfully.
Deep Lake Dataset in hub://user_name/langchain-code already exists, loading from the storage
Dataset(path='hub://user_name/langchain-code', read_only=True, tensors=['embedding', 'ids', 'metadata', 'text'])
tensor htype shape dtype compression
------- ------- ------- ------- -------
embedding generic (3477, 1536) float32 None
ids text (3477, 1) str None
metadata json (3477, 1) str None
text text (3477, 1) str None
retriever = db.as_retriever()
retriever.search_kwargs['distance_metric'] = 'cos'  # use cosine distance for similarity search
retriever.search_kwargs['fetch_k'] = 20  # number of documents to fetch before re-ranking
retriever.search_kwargs['maximal_marginal_relevance'] = True  # re-rank with MMR for diversity
retriever.search_kwargs['k'] = 20  # number of documents ultimately returned
You can also specify user-defined filtering functions using Deep Lake filters:
def filter(x):
# filter based on source code
if 'something' in x['text'].data()['value']:
return False
# filter based on path e.g. extension
metadata = x['metadata'].data()['value']
return 'only_this' in metadata['source'] or 'also_that' in metadata['source']
### Uncomment the line below to turn on custom filtering
# retriever.search_kwargs['filter'] = filter
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain | https://python.langchain.com/en/latest/use_cases/code/code-analysis-deeplake.html |
model = ChatOpenAI(model='gpt-3.5-turbo') # 'ada' 'gpt-3.5-turbo' 'gpt-4',
qa = ConversationalRetrievalChain.from_llm(model,retriever=retriever)
questions = [
"What is the class hierarchy?",
# "What classes are derived from the Chain class?",
# "What classes and functions in the ./langchain/utilities/ forlder are not covered by unit tests?",
# "What one improvement do you propose in code in relation to the class herarchy for the Chain class?",
]
chat_history = []
for question in questions:
result = qa({"question": question, "chat_history": chat_history})
chat_history.append((question, result['answer']))
print(f"-> **Question**: {question} \n")
print(f"**Answer**: {result['answer']} \n")
-> Question: What is the class hierarchy?
Answer: There are several class hierarchies in the provided code, so I’ll list a few:
BaseModel -> ConstitutionalPrinciple: ConstitutionalPrinciple is a subclass of BaseModel.
BasePromptTemplate -> StringPromptTemplate, AIMessagePromptTemplate, BaseChatPromptTemplate, ChatMessagePromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate, MessagesPlaceholder, SystemMessagePromptTemplate, FewShotPromptTemplate, FewShotPromptWithTemplates, Prompt, PromptTemplate: All of these classes are subclasses of BasePromptTemplate. | https://python.langchain.com/en/latest/use_cases/code/code-analysis-deeplake.html |
APIChain, Chain, MapReduceDocumentsChain, MapRerankDocumentsChain, RefineDocumentsChain, StuffDocumentsChain, HypotheticalDocumentEmbedder, LLMChain, LLMBashChain, LLMCheckerChain, LLMMathChain, LLMRequestsChain, PALChain, QAWithSourcesChain, VectorDBQAWithSourcesChain, VectorDBQA, SQLDatabaseChain: All of these classes are subclasses of Chain.
BaseLoader: BaseLoader is a subclass of ABC.
BaseTracer -> ChainRun, LLMRun, SharedTracer, ToolRun, Tracer, TracerException, TracerSession: All of these classes are subclasses of BaseTracer.
OpenAIEmbeddings, HuggingFaceEmbeddings, CohereEmbeddings, JinaEmbeddings, LlamaCppEmbeddings, HuggingFaceHubEmbeddings, TensorflowHubEmbeddings, SagemakerEndpointEmbeddings, HuggingFaceInstructEmbeddings, SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings, SelfHostedHuggingFaceInstructEmbeddings, FakeEmbeddings, AlephAlphaAsymmetricSemanticEmbedding, AlephAlphaSymmetricSemanticEmbedding: All of these classes are subclasses of BaseLLM.
-> Question: What classes are derived from the Chain class?
Answer: There are multiple classes that are derived from the Chain class. Some of them are:
APIChain
AnalyzeDocumentChain
ChatVectorDBChain
CombineDocumentsChain
ConstitutionalChain
ConversationChain
GraphQAChain
HypotheticalDocumentEmbedder
LLMChain
LLMCheckerChain
LLMRequestsChain
LLMSummarizationCheckerChain
MapReduceChain
OpenAPIEndpointChain
PALChain
QAWithSourcesChain
RetrievalQA
RetrievalQAWithSourcesChain
SequentialChain
SQLDatabaseChain
TransformChain
VectorDBQA
VectorDBQAWithSourcesChain | https://python.langchain.com/en/latest/use_cases/code/code-analysis-deeplake.html |
There might be more classes that are derived from the Chain class as it is possible to create custom classes that extend the Chain class.
-> Question: What classes and functions in the ./langchain/utilities/ folder are not covered by unit tests?
Answer: All classes and functions in the ./langchain/utilities/ folder seem to have unit tests written for them.
Contents
Design
Implementation
Integration preparations
Prepare data
Question Answering
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 21, 2023. | https://python.langchain.com/en/latest/use_cases/code/code-analysis-deeplake.html |
6083f82d91e3-0 | .ipynb
.pdf
Question Answering
Contents
Setup
Examples
Predictions
Evaluation
Customize Prompt
Evaluation without Ground Truth
Comparing to other evaluation metrics
Question Answering#
This notebook covers how to evaluate generic question answering problems. This is a situation where you have an example containing a question and its corresponding ground truth answer, and you want to measure how well the language model does at answering those questions.
Setup#
For demonstration purposes, we will just evaluate a simple question answering system that relies only on the model’s internal knowledge. Please see other notebooks for examples that evaluate question answering over data the model was not trained on.
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.llms import OpenAI
prompt = PromptTemplate(template="Question: {question}\nAnswer:", input_variables=["question"])
llm = OpenAI(model_name="text-davinci-003", temperature=0)
chain = LLMChain(llm=llm, prompt=prompt)
Examples#
For this purpose, we will just use two simple hardcoded examples, but see other notebooks for tips on how to get and/or generate these examples.
examples = [
{
"question": "Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?",
"answer": "11"
},
{
"question": 'Is the following sentence plausible? "Joao Moutinho caught the screen pass in the NFC championship."',
"answer": "No"
}
]
Predictions#
We can now make and inspect the predictions for these questions.
predictions = chain.apply(examples)
predictions
[{'text': ' 11 tennis balls'}, | https://python.langchain.com/en/latest/use_cases/evaluation/question_answering.html |
{'text': ' No, this sentence is not plausible. Joao Moutinho is a professional soccer player, not an American football player, so it is not likely that he would be catching a screen pass in the NFC championship.'}]
Evaluation#
We can see that if we tried to just do an exact match against the ground truth answers (11 and No), the predictions would not match, even though the language model is semantically correct in both cases. In order to account for this, we can use a language model itself to evaluate the answers.
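As a quick illustration (a minimal sketch, not part of the original notebook), a naive string comparison against the ground truth marks both predictions wrong even though both are semantically correct:
# Naive exact-match check against the hardcoded answers; both comparisons come back False.
exact_matches = [pred['text'].strip() == eg['answer'] for eg, pred in zip(examples, predictions)]
exact_matches  # [False, False]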
from langchain.evaluation.qa import QAEvalChain
llm = OpenAI(temperature=0)
eval_chain = QAEvalChain.from_llm(llm)
graded_outputs = eval_chain.evaluate(examples, predictions, question_key="question", prediction_key="text")
for i, eg in enumerate(examples):
print(f"Example {i}:")
print("Question: " + eg['question'])
print("Real Answer: " + eg['answer'])
print("Predicted Answer: " + predictions[i]['text'])
print("Predicted Grade: " + graded_outputs[i]['text'])
print()
Example 0:
Question: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
Real Answer: 11
Predicted Answer: 11 tennis balls
Predicted Grade: CORRECT
Example 1:
Question: Is the following sentence plausible? "Joao Moutinho caught the screen pass in the NFC championship."
Real Answer: No | https://python.langchain.com/en/latest/use_cases/evaluation/question_answering.html |
Predicted Answer: No, this sentence is not plausible. Joao Moutinho is a professional soccer player, not an American football player, so it is not likely that he would be catching a screen pass in the NFC championship.
Predicted Grade: CORRECT
Customize Prompt#
You can also customize the prompt that is used. Here is an example that prompts for a score from 0 to 10.
The custom prompt requires 3 input variables: “query”, “answer” and “result”, where “query” is the question, “answer” is the ground truth answer, and “result” is the predicted answer.
from langchain.prompts.prompt import PromptTemplate
_PROMPT_TEMPLATE = """You are an expert professor specialized in grading students' answers to questions.
You are grading the following question:
{query}
Here is the real answer:
{answer}
You are grading the following predicted answer:
{result}
What grade do you give from 0 to 10, where 0 is the lowest (very low similarity) and 10 is the highest (very high similarity)?
"""
PROMPT = PromptTemplate(input_variables=["query", "answer", "result"], template=_PROMPT_TEMPLATE)
evalchain = QAEvalChain.from_llm(llm=llm, prompt=PROMPT)
evalchain.evaluate(examples, predictions, question_key="question", answer_key="answer", prediction_key="text")
Evaluation without Ground Truth#
It's possible to evaluate question answering systems without ground truth. You would need a "context" input that reflects the information the LLM uses to answer the question. This context can be obtained from any retrieval system. Here’s an example of how it works:
context_examples = [
{
"question": "How old am I?", | https://python.langchain.com/en/latest/use_cases/evaluation/question_answering.html |
"context": "I am 30 years old. I live in New York and take the train to work everyday.",
},
{
"question": 'Who won the NFC championship game in 2023?"',
"context": "NFC Championship Game 2023: Philadelphia Eagles 31, San Francisco 49ers 7"
}
]
QA_PROMPT = "Answer the question based on the context\nContext:{context}\nQuestion:{question}\nAnswer:"
template = PromptTemplate(input_variables=["context", "question"], template=QA_PROMPT)
qa_chain = LLMChain(llm=llm, prompt=template)
predictions = qa_chain.apply(context_examples)
predictions
[{'text': 'You are 30 years old.'},
{'text': ' The Philadelphia Eagles won the NFC championship game in 2023.'}]
from langchain.evaluation.qa import ContextQAEvalChain
eval_chain = ContextQAEvalChain.from_llm(llm)
graded_outputs = eval_chain.evaluate(context_examples, predictions, question_key="question", prediction_key="text")
graded_outputs
[{'text': ' CORRECT'}, {'text': ' CORRECT'}]
Comparing to other evaluation metrics#
We can compare the evaluation results we get to other common evaluation metrics. To do this, let’s load some evaluation metrics from HuggingFace’s evaluate package.
# Some data munging to get the examples in the right format
for i, eg in enumerate(examples):
eg['id'] = str(i)
eg['answers'] = {"text": [eg['answer']], "answer_start": [0]}
predictions[i]['id'] = str(i) | https://python.langchain.com/en/latest/use_cases/evaluation/question_answering.html |
predictions[i]['prediction_text'] = predictions[i]['text']
for p in predictions:
del p['text']
new_examples = examples.copy()
for eg in new_examples:
del eg['question']
del eg['answer']
from evaluate import load
squad_metric = load("squad")
results = squad_metric.compute(
references=new_examples,
predictions=predictions,
)
results
{'exact_match': 0.0, 'f1': 28.125}
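For contrast, a small sketch (not part of the original notebook): the LLM-based graders earlier in this notebook marked every prediction CORRECT, whereas token-overlap metrics such as exact match penalize answers that are merely phrased differently from the reference.
# Fraction of predictions the most recent eval chain graded as CORRECT (1.0 here, versus exact_match of 0.0 above).
llm_accuracy = sum('CORRECT' in g['text'] for g in graded_outputs) / len(graded_outputs)
llm_accuracy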
previous
QA Generation
next
SQL Question Answering Benchmarking: Chinook
Contents
Setup
Examples
Predictions
Evaluation
Customize Prompt
Evaluation without Ground Truth
Comparing to other evaluation metrics
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Apr 21, 2023. | https://python.langchain.com/en/latest/use_cases/evaluation/question_answering.html |
cf359ee55f68-0 | .ipynb
.pdf
Evaluating an OpenAPI Chain
Contents
Load the API Chain
Optional: Generate Input Questions and Request Ground Truth Queries
Run the API Chain
Evaluate the requests chain
Evaluate the Response Chain
Generating Test Datasets
Evaluating an OpenAPI Chain#
This notebook goes over ways to semantically evaluate an OpenAPI Chain, which calls an endpoint defined by the OpenAPI specification using purely natural language.
from langchain.tools import OpenAPISpec, APIOperation
from langchain.chains import OpenAPIEndpointChain, LLMChain
from langchain.requests import Requests
from langchain.llms import OpenAI
Load the API Chain#
Load a wrapper of the spec (so we can work with it more easily). You can load from a URL or from a local file.
# Load and parse the OpenAPI Spec
spec = OpenAPISpec.from_url("https://www.klarna.com/us/shopping/public/openai/v0/api-docs/")
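# Alternatively (a hypothetical variant, not shown in this notebook), the spec could be
# loaded from a local copy of the same document:
# spec = OpenAPISpec.from_file("klarna_openapi.yaml")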
# Load a single endpoint operation
operation = APIOperation.from_openapi_spec(spec, '/public/openai/v0/products', "get")
verbose = False
# Select any LangChain LLM
llm = OpenAI(temperature=0, max_tokens=1000)
# Create the endpoint chain
api_chain = OpenAPIEndpointChain.from_api_operation(
operation,
llm,
requests=Requests(),
verbose=verbose,
return_intermediate_steps=True # Return request and response text
)
Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.
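Before running the full evaluation below, the chain can be sanity-checked with a single question. A minimal sketch (not part of the original notebook); the returned dict exposes the final answer under 'output', plus the intermediate request and response text because return_intermediate_steps=True:
# Run one natural-language question through the endpoint chain and print the answer.
result = api_chain("What iPhone models are available?")
print(result["output"])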
Optional: Generate Input Questions and Request Ground Truth Queries#
See Generating Test Datasets at the end of this notebook for more details.
# import re | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
# from langchain.prompts import PromptTemplate
# template = """Below is a service description:
# {spec}
# Imagine you're a new user trying to use {operation} through a search bar. What are 10 different things you want to request?
# Wants/Questions:
# 1. """
# prompt = PromptTemplate.from_template(template)
# generation_chain = LLMChain(llm=llm, prompt=prompt)
# questions_ = generation_chain.run(spec=operation.to_typescript(), operation=operation.operation_id).split('\n')
# # Strip preceding numeric bullets
# questions = [re.sub(r'^\d+\. ', '', q).strip() for q in questions_]
# questions
# ground_truths = [
# {"q": ...} # What are the best queries for each input?
# ]
Run the API Chain#
The two simplest questions a user of the API Chain can ask are:
Did the chain successfully access the endpoint?
Did the action accomplish the correct result?
from collections import defaultdict
# Collect metrics to report at completion
scores = defaultdict(list)
from langchain.evaluation.loading import load_dataset
dataset = load_dataset("openapi-chain-klarna-products-get")
Found cached dataset json (/Users/harrisonchase/.cache/huggingface/datasets/LangChainDatasets___json/LangChainDatasets--openapi-chain-klarna-products-get-5d03362007667626/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51)
dataset
[{'question': 'What iPhone models are available?', | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
'expected_query': {'max_price': None, 'q': 'iPhone'}},
{'question': 'Are there any budget laptops?',
'expected_query': {'max_price': 300, 'q': 'laptop'}},
{'question': 'Show me the cheapest gaming PC.',
'expected_query': {'max_price': 500, 'q': 'gaming pc'}},
{'question': 'Are there any tablets under $400?',
'expected_query': {'max_price': 400, 'q': 'tablet'}},
{'question': 'What are the best headphones?',
'expected_query': {'max_price': None, 'q': 'headphones'}},
{'question': 'What are the top rated laptops?',
'expected_query': {'max_price': None, 'q': 'laptop'}},
{'question': 'I want to buy some shoes. I like Adidas and Nike.',
'expected_query': {'max_price': None, 'q': 'shoe'}},
{'question': 'I want to buy a new skirt',
'expected_query': {'max_price': None, 'q': 'skirt'}},
{'question': 'My company is asking me to get a professional Deskopt PC - money is no object.',
'expected_query': {'max_price': 10000, 'q': 'professional desktop PC'}},
{'question': 'What are the best budget cameras?',
'expected_query': {'max_price': 300, 'q': 'camera'}}]
questions = [d['question'] for d in dataset]
## Run the API chain itself
raise_error = False # Stop on first failed example - useful for development
chain_outputs = []
failed_examples = []
for question in questions:
try: | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
chain_outputs.append(api_chain(question))
scores["completed"].append(1.0)
except Exception as e:
if raise_error:
raise e
failed_examples.append({'q': question, 'error': e})
scores["completed"].append(0.0)
# If the chain failed to run, show the failing examples
failed_examples
[]
answers = [res['output'] for res in chain_outputs]
answers
['There are currently 10 Apple iPhone models available: Apple iPhone 14 Pro Max 256GB, Apple iPhone 12 128GB, Apple iPhone 13 128GB, Apple iPhone 14 Pro 128GB, Apple iPhone 14 Pro 256GB, Apple iPhone 14 Pro Max 128GB, Apple iPhone 13 Pro Max 128GB, Apple iPhone 14 128GB, Apple iPhone 12 Pro 512GB, and Apple iPhone 12 mini 64GB.',
'Yes, there are several budget laptops in the API response. For example, the HP 14-dq0055dx and HP 15-dw0083wm are both priced at $199.99 and $244.99 respectively.',
'The cheapest gaming PC available is the Alarco Gaming PC (X_BLACK_GTX750) for $499.99. You can find more information about it here: https://www.klarna.com/us/shopping/pl/cl223/3203154750/Desktop-Computers/Alarco-Gaming-PC-%28X_BLACK_GTX750%29/?utm_source=openai&ref-site=openai_plugin', | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
'Yes, there are several tablets under $400. These include the Apple iPad 10.2" 32GB (2019), Samsung Galaxy Tab A8 10.5 SM-X200 32GB, Samsung Galaxy Tab A7 Lite 8.7 SM-T220 32GB, Amazon Fire HD 8" 32GB (10th Generation), and Amazon Fire HD 10 32GB.',
'It looks like you are looking for the best headphones. Based on the API response, it looks like the Apple AirPods Pro (2nd generation) 2022, Apple AirPods Max, and Bose Noise Cancelling Headphones 700 are the best options.',
'The top rated laptops based on the API response are the Apple MacBook Pro (2021) M1 Pro 8C CPU 14C GPU 16GB 512GB SSD 14", Apple MacBook Pro (2022) M2 OC 10C GPU 8GB 256GB SSD 13.3", Apple MacBook Air (2022) M2 OC 8C GPU 8GB 256GB SSD 13.6", and Apple MacBook Pro (2023) M2 Pro OC 16C GPU 16GB 512GB SSD 14.2".', | https://python.langchain.com/en/latest/use_cases/evaluation/openapi_eval.html |
"I found several Nike and Adidas shoes in the API response. Here are the links to the products: Nike Dunk Low M - Black/White: https://www.klarna.com/us/shopping/pl/cl337/3200177969/Shoes/Nike-Dunk-Low-M-Black-White/?utm_source=openai&ref-site=openai_plugin, Nike Air Jordan 4 Retro M - Midnight Navy: https://www.klarna.com/us/shopping/pl/cl337/3202929835/Shoes/Nike-Air-Jordan-4-Retro-M-Midnight-Navy/?utm_source=openai&ref-site=openai_plugin, Nike Air Force 1 '07 M - White: https://www.klarna.com/us/shopping/pl/cl337/3979297/Shoes/Nike-Air-Force-1-07-M-White/?utm_source=openai&ref-site=openai_plugin, Nike Dunk Low W - White/Black: https://www.klarna.com/us/shopping/pl/cl337/3200134705/Shoes/Nike-Dunk-Low-W-White-Black/?utm_source=openai&ref-site=openai_plugin, Nike Air Jordan 1 Retro High M - White/University Blue/Black: https://www.klarna.com/us/shopping/pl/cl337/3200383658/Shoes/Nike-Air-Jordan-1-Retro-High-M-White-University-Blue-Black/?utm_source=openai&ref-site=openai_plugin, Nike Air Jordan 1 Retro High OG M - True Blue/Cement