b7324fa37a68-2
docs_and_scores = db.similarity_search_with_score(query)
docs_and_scores[0]

(Document(page_content='In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. \n\nWe cannot let this happen. \n\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), 0.3914415)

It is also possible to search for documents similar to a given embedding vector using similarity_search_by_vector, which accepts an embedding vector as a parameter instead of a string.

embedding_vector = embeddings.embed_query(query)
docs_and_scores = db.similarity_search_by_vector(embedding_vector)

Saving and loading#

You can also save and load a FAISS index. This is useful so you don’t have to recreate it every time you use it.

db.save_local("faiss_index")
new_db = FAISS.load_local("faiss_index", embeddings)
docs = new_db.similarity_search(query)
docs[0]
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/faiss.html
b7324fa37a68-3
Document(page_content='In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. \n\nWe cannot let this happen. \n\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0)

Merging#

You can also merge two FAISS vectorstores.

db1 = FAISS.from_texts(["foo"], embeddings)
db2 = FAISS.from_texts(["bar"], embeddings)

db1.docstore._dict
{'e0b74348-6c93-4893-8764-943139ec1d17': Document(page_content='foo', lookup_str='', metadata={}, lookup_index=0)}

db2.docstore._dict
{'bdc50ae3-a1bb-4678-9260-1b0979578f40': Document(page_content='bar', lookup_str='', metadata={}, lookup_index=0)}

db1.merge_from(db2)
db1.docstore._dict
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/faiss.html
b7324fa37a68-4
{'e0b74348-6c93-4893-8764-943139ec1d17': Document(page_content='foo', lookup_str='', metadata={}, lookup_index=0),
 'd5211050-c777-493d-8825-4800e74cfdb6': Document(page_content='bar', lookup_str='', metadata={}, lookup_index=0)}
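Because merge_from folds the second index into the first in place, the same pattern extends to building one FAISS index from several batches. A minimal sketch under that assumption (the batch contents here are illustrative, not from the notebook):

from langchain.vectorstores import FAISS

# Illustrative text batches; in practice these could be chunks arriving over time.
batches = [["foo"], ["bar", "baz"]]

# Build an index from the first batch, then fold each later batch in.
db = FAISS.from_texts(batches[0], embeddings)
for batch in batches[1:]:
    db.merge_from(FAISS.from_texts(batch, embeddings))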
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/faiss.html
210280daacf2-0
Zilliz#

Zilliz Cloud is a fully managed, cloud-based service for LF AI Milvus®. This notebook shows how to use functionality related to the Zilliz Cloud managed vector database.

To run, you should have a Zilliz Cloud instance up and running. Here are the installation instructions.

!pip install pymilvus

We want to use OpenAIEmbeddings, so we have to get the OpenAI API key.

import os
import getpass

os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')

# replace with your own values
ZILLIZ_CLOUD_URI = ""  # example: "https://in01-17f69c292d4a5sa.aws-us-west-2.vectordb.zillizcloud.com:19536"
ZILLIZ_CLOUD_USERNAME = ""  # example: "username"
ZILLIZ_CLOUD_PASSWORD = ""  # example: "*********"

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Milvus
from langchain.document_loaders import TextLoader

loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()

vector_db = Milvus.from_documents(
    docs,
    embeddings,
    connection_args={
        "uri": ZILLIZ_CLOUD_URI,
        "username": ZILLIZ_CLOUD_USERNAME,
        "password": ZILLIZ_CLOUD_PASSWORD,
        "secure": True
    }
)
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/zilliz.html
210280daacf2-1
"secure": True } ) docs = vector_db.similarity_search(query) docs[0] previous Weaviate next Retrievers By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on May 02, 2023.
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/zilliz.html
348008827e64-0
MyScale#

MyScale is a cloud-based database optimized for AI applications and solutions, built on the open-source ClickHouse. This notebook shows how to use functionality related to the MyScale vector database.

Setting up environments#

!pip install clickhouse-connect

We want to use OpenAIEmbeddings, so we have to get the OpenAI API key.

import os
import getpass

os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')

There are two ways to set up parameters for the MyScale index.

1. Environment variables. Before you run the app, please set them with export:

export MYSCALE_URL='<your-endpoints-url>' MYSCALE_PORT=<your-endpoints-port> MYSCALE_USERNAME=<your-username> MYSCALE_PASSWORD=<your-password> ...

You can easily find your account, password and other info on our SaaS. For details please refer to this document. Every attribute under MyScaleSettings can be set with the prefix MYSCALE_ and is case-insensitive.

2. Create a MyScaleSettings object with parameters:

from langchain.vectorstores import MyScale, MyScaleSettings

config = MyScaleSettings(host="<your-backend-url>", port=8443, ...)
index = MyScale(embedding_function, config)
index.add_documents(...)

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import MyScale
from langchain.document_loaders import TextLoader

loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/myscale.html
348008827e64-1
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()

for d in docs:
    d.metadata = {'some': 'metadata'}

docsearch = MyScale.from_documents(docs, embeddings)

query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)

Inserting data...: 100%|██████████| 42/42 [00:18<00:00, 2.21it/s]

print(docs[0].page_content)

As Frances Haugen, who is here with us tonight, has shown, we must hold social media platforms accountable for the national experiment they’re conducting on our children for profit.

It’s time to strengthen privacy protections, ban targeted advertising to children, demand tech companies stop collecting personal data on our children.

And let’s get all Americans the mental health services they need. More people they can turn to for help, and full parity between physical and mental health care.

Third, support our veterans.

Veterans are the best of us.

I’ve always believed that we have a sacred obligation to equip all those we send to war and care for them and their families when they come home.

My administration is providing assistance with job training and housing, and now helping lower-income veterans get VA care debt-free.

Our troops in Iraq and Afghanistan faced many dangers.

Get connection info and data schema#

print(str(docsearch))

Filtering#

You have direct access to the MyScale SQL WHERE statement: you can write a WHERE clause following standard SQL.

NOTE: Please be aware of SQL injection; this interface must not be called directly by the end user.
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/myscale.html
348008827e64-2
If you customized your column_map under your setting, you can search with a filter like this:

from langchain.vectorstores import MyScale, MyScaleSettings
from langchain.document_loaders import TextLoader

loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()

for i, d in enumerate(docs):
    d.metadata = {'doc_id': i}

docsearch = MyScale.from_documents(docs, embeddings)

Inserting data...: 100%|██████████| 42/42 [00:15<00:00, 2.69it/s]

meta = docsearch.metadata_column
output = docsearch.similarity_search_with_relevance_scores(
    'What did the president say about Ketanji Brown Jackson?',
    k=4, where_str=f"{meta}.doc_id<10")
for d, dist in output:
    print(dist, d.metadata, d.page_content[:20] + '...')

0.252379834651947 {'doc_id': 6, 'some': ''} And I’m taking robus...
0.25022566318511963 {'doc_id': 1, 'some': ''} Groups of citizens b...
0.2469480037689209 {'doc_id': 8, 'some': ''} And so many families...
0.2428302764892578 {'doc_id': 0, 'some': 'metadata'} As Frances Haugen, w...

Deleting your data#

docsearch.drop()
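Because where_str is interpolated directly into the generated SQL, the injection warning in the filtering section above applies to anything user-supplied. A minimal sketch of one way to guard the clause before passing it in (the int() coercion is an illustrative safeguard, not part of the MyScale API):

def doc_id_filter(meta_column, max_doc_id):
    # Coercing to int rejects any value that could smuggle SQL into the clause.
    return f"{meta_column}.doc_id < {int(max_doc_id)}"

output = docsearch.similarity_search_with_relevance_scores(
    'What did the president say about Ketanji Brown Jackson?',
    k=4,
    where_str=doc_id_filter(docsearch.metadata_column, 10),
)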
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/myscale.html
adc3505b01c8-0
Tair#

This notebook shows how to use functionality related to the Tair vector database. To run, you should have a Tair instance up and running.

from langchain.embeddings.fake import FakeEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Tair
from langchain.document_loaders import TextLoader

loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

embeddings = FakeEmbeddings(size=128)

Connect to Tair using the TAIR_URL environment variable

export TAIR_URL="redis://{username}:{password}@{tair_address}:{tair_port}"

or the keyword argument tair_url. Then store documents and embeddings into Tair.

tair_url = "redis://localhost:6379"

# drop first if index already exists
Tair.drop_index(tair_url=tair_url)

vector_store = Tair.from_documents(
    docs,
    embeddings,
    tair_url=tair_url
)

Query similar documents.

query = "What did the president say about Ketanji Brown Jackson"
docs = vector_store.similarity_search(query)
docs[0]
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/tair.html
adc3505b01c8-1
Document(page_content='We’re going after the criminals who stole billions in relief money meant for small businesses and millions of Americans. \n\nAnd tonight, I’m announcing that the Justice Department will name a chief prosecutor for pandemic fraud. \n\nBy the end of this year, the deficit will be down to less than half what it was before I took office. \n\nThe only president ever to cut the deficit by more than one trillion dollars in a single year. \n\nLowering your costs also means demanding more competition. \n\nI’m a capitalist, but capitalism without competition isn’t capitalism. \n\nIt’s exploitation—and it drives up prices. \n\nWhen corporations don’t have to compete, their profits go up, your prices go up, and small businesses and family farmers and ranchers go under. \n\nWe see it happening with ocean carriers moving goods in and out of America. \n\nDuring the pandemic, these foreign-owned companies raised prices by as much as 1,000% and made record profits.', metadata={'source': '../../../state_of_the_union.txt'})
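The notebook notes that the connection can come from the TAIR_URL environment variable or the tair_url keyword argument. A minimal sketch that reads the variable and falls back to the local endpoint used above:

import os

tair_url = os.environ.get("TAIR_URL", "redis://localhost:6379")
vector_store = Tair.from_documents(docs, embeddings, tair_url=tair_url)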
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/tair.html
e338232575b4-0
Qdrant#

Qdrant (read: quadrant) is a vector similarity search engine. It provides a production-ready service with a convenient API to store, search, and manage points - vectors with an additional payload. Qdrant is tailored to extended filtering support, which makes it useful for all sorts of neural-network or semantic-based matching, faceted search, and other applications.

This notebook shows how to use functionality related to the Qdrant vector database. There are various modes of running Qdrant, and depending on the chosen one, there will be some subtle differences. The options include:

- Local mode, no server required
- On-premise server deployment
- Qdrant Cloud

See the installation instructions.

!pip install qdrant-client

We want to use OpenAIEmbeddings, so we have to get the OpenAI API key.

import os
import getpass

os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Qdrant
from langchain.document_loaders import TextLoader

loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/qdrant.html
e338232575b4-1
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()

Connecting to Qdrant from LangChain#

Local mode#

The Python client allows you to run the same code in local mode without running the Qdrant server. That’s great for testing things out and debugging, or if you plan to store just a small number of vectors. The embeddings might be fully kept in memory or persisted on disk.

In-memory#

For some testing scenarios and quick experiments, you may prefer to keep all the data in memory only, so it gets lost when the client is destroyed - usually at the end of your script/notebook.

qdrant = Qdrant.from_documents(
    docs, embeddings,
    location=":memory:",  # Local mode with in-memory storage only
    collection_name="my_documents",
)

On-disk storage#

Local mode, without using the Qdrant server, may also store your vectors on disk so they’re persisted between runs.

qdrant = Qdrant.from_documents(
    docs, embeddings,
    path="/tmp/local_qdrant",
    collection_name="my_documents",
)

On-premise server deployment#

No matter if you choose to launch Qdrant locally with a Docker container or select a Kubernetes deployment with the official Helm chart, the way you’re going to connect to such an instance will be identical. You’ll need to provide a URL pointing to the service.

url = "<---qdrant url here --->"
qdrant = Qdrant.from_documents(
    docs, embeddings,
    url, prefer_grpc=True,
    collection_name="my_documents",
)

Qdrant Cloud#
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/qdrant.html
e338232575b4-2
collection_name="my_documents", ) Qdrant Cloud# If you prefer not to keep yourself busy with managing the infrastructure, you can choose to set up a fully-managed Qdrant cluster on Qdrant Cloud. There is a free forever 1GB cluster included for trying out. The main difference with using a managed version of Qdrant is that you’ll need to provide an API key to secure your deployment from being accessed publicly. url = "<---qdrant cloud cluster url here --->" api_key = "<---api key here--->" qdrant = Qdrant.from_documents( docs, embeddings, url, prefer_grpc=True, api_key=api_key, collection_name="my_documents", ) Reusing the same collection# Both Qdrant.from_texts and Qdrant.from_documents methods are great to start using Qdrant with LangChain, but they are going to destroy the collection and create it from scratch! If you want to reuse the existing collection, you can always create an instance of Qdrant on your own and pass the QdrantClient instance with the connection details. del qdrant import qdrant_client client = qdrant_client.QdrantClient( path="/tmp/local_qdrant", prefer_grpc=True ) qdrant = Qdrant( client=client, collection_name="my_documents", embedding_function=embeddings.embed_query ) Similarity search# The simplest scenario for using Qdrant vector store is to perform a similarity search. Under the hood, our query will be encoded with the embedding_function and used to find similar documents in Qdrant collection. query = "What did the president say about Ketanji Brown Jackson" found_docs = qdrant.similarity_search(query)
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/qdrant.html
e338232575b4-3
print(found_docs[0].page_content)

Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.

Similarity search with score#

Sometimes we might want to perform the search, but also obtain a relevancy score to know how good a particular result is.

query = "What did the president say about Ketanji Brown Jackson"
found_docs = qdrant.similarity_search_with_score(query)
document, score = found_docs[0]
print(document.page_content)
print(f"\nScore: {score}")

Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/qdrant.html
e338232575b4-4
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.

Score: 0.8153784913324512

Maximum marginal relevance search (MMR)#

If you’d like to look up some similar documents but also receive diverse results, MMR is a method you should consider. Maximal marginal relevance optimizes for similarity to the query AND diversity among the selected documents.

query = "What did the president say about Ketanji Brown Jackson"
found_docs = qdrant.max_marginal_relevance_search(query, k=2, fetch_k=10)
for i, doc in enumerate(found_docs):
    print(f"{i + 1}.", doc.page_content, "\n")

1. Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/qdrant.html
e338232575b4-5
2. We can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together.

I recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun.

Officer Mora was 27 years old.

Officer Rivera was 22.

Both Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers.

I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.

I’ve worked on these issues a long time.

I know what works: Investing in crime prevention and community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.

Qdrant as a Retriever#

Qdrant, like all the other vector stores, can be used as a LangChain Retriever, using cosine similarity.

retriever = qdrant.as_retriever()
retriever

VectorStoreRetriever(vectorstore=<langchain.vectorstores.qdrant.Qdrant object at 0x7fc4e5720a00>, search_type='similarity', search_kwargs={})

It can also be specified to use MMR as a search strategy instead of similarity.

retriever = qdrant.as_retriever(search_type="mmr")
retriever

VectorStoreRetriever(vectorstore=<langchain.vectorstores.qdrant.Qdrant object at 0x7fc4e5720a00>, search_type='mmr', search_kwargs={})
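The empty search_kwargs shown in the repr can also be populated to tune the retriever; a minimal sketch (the k value here is illustrative, not from the notebook):

retriever = qdrant.as_retriever(search_type="mmr", search_kwargs={"k": 2})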
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/qdrant.html
e338232575b4-6
query = "What did the president say about Ketanji Brown Jackson" retriever.get_relevant_documents(query)[0] Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'}) Customizing Qdrant# Qdrant stores your vector embeddings along with the optional JSON-like payload. Payloads are optional, but since LangChain assumes the embeddings are generated from the documents, we keep the context data, so you can extract the original texts as well. By default, your document is going to be stored in the following payload structure: { "page_content": "Lorem ipsum dolor sit amet", "metadata": { "foo": "bar" } } You can, however, decide to use different keys for the page content and metadata. That’s useful if you already have a collection that you’d like to reuse. You can always change the Qdrant.from_documents( docs, embeddings, location=":memory:", collection_name="my_documents_2",
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/qdrant.html
e338232575b4-7
location=":memory:", collection_name="my_documents_2", content_payload_key="my_page_content_key", metadata_payload_key="my_meta", ) <langchain.vectorstores.qdrant.Qdrant at 0x7fc4e2baa230> previous Pinecone next Redis Contents Connecting to Qdrant from LangChain Local mode In-memory On-disk storage On-premise server deployment Qdrant Cloud Reusing the same collection Similarity search Similarity search with score Maximum marginal relevance search (MMR) Qdrant as a Retriever Customizing Qdrant By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on May 02, 2023.
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/qdrant.html
fe27a7433977-0
AnalyticDB#

AnalyticDB for PostgreSQL is a massively parallel processing (MPP) data warehousing service that is designed to analyze large volumes of data online.

AnalyticDB for PostgreSQL is developed based on the open-source Greenplum Database project and is enhanced with in-depth extensions by Alibaba Cloud. It is compatible with the ANSI SQL 2003 syntax and the PostgreSQL and Oracle database ecosystems, supports both row store and column store, processes petabytes of data offline at a high performance level, and supports highly concurrent online queries.

This notebook shows how to use functionality related to the AnalyticDB vector database. To run, you should have an AnalyticDB instance up and running: use the AnalyticDB Cloud Vector Database. Click here to fast deploy it.

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import AnalyticDB

Split documents and get embeddings by calling the OpenAI API.

from langchain.document_loaders import TextLoader

loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()

Connect to AnalyticDB by setting the related environment variables.

export PG_HOST={your_analyticdb_hostname}
export PG_PORT={your_analyticdb_port} # Optional, default is 5432
export PG_DATABASE={your_database} # Optional, default is postgres
export PG_USER={database_username}
export PG_PASSWORD={database_password}

Then store your embeddings and documents into AnalyticDB.
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/analyticdb.html
fe27a7433977-1
import os

connection_string = AnalyticDB.connection_string_from_db_params(
    driver=os.environ.get("PG_DRIVER", "psycopg2cffi"),
    host=os.environ.get("PG_HOST", "localhost"),
    port=int(os.environ.get("PG_PORT", "5432")),
    database=os.environ.get("PG_DATABASE", "postgres"),
    user=os.environ.get("PG_USER", "postgres"),
    password=os.environ.get("PG_PASSWORD", "postgres"),
)

vector_db = AnalyticDB.from_documents(
    docs,
    embeddings,
    connection_string=connection_string,
)

Query and retrieve data.

query = "What did the president say about Ketanji Brown Jackson"
docs = vector_db.similarity_search(query)
print(docs[0].page_content)

Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
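For notebook use, the same PG_* settings can be provided from Python instead of a shell export before building the connection string; a minimal sketch (all values are placeholders):

import os

os.environ["PG_HOST"] = "your_analyticdb_hostname"
os.environ["PG_PORT"] = "5432"          # optional, default is 5432
os.environ["PG_DATABASE"] = "postgres"  # optional, default is postgres
os.environ["PG_USER"] = "database_username"
os.environ["PG_PASSWORD"] = "database_password"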
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/analyticdb.html
88595ad5f89f-0
Deep Lake#

Deep Lake is a multi-modal vector store that stores embeddings and their metadata, including text, JSONs, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage. It performs hybrid search, including embeddings and their attributes.

This notebook showcases basic functionality related to Deep Lake. While Deep Lake can store embeddings, it is capable of storing any type of data. It is a fully-fledged serverless data lake with version control, a query engine, and a streaming dataloader to deep learning frameworks.

For more information, please see the Deep Lake documentation or api reference.

!pip install openai deeplake tiktoken

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import DeepLake

import os
import getpass

os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')

from langchain.document_loaders import TextLoader

loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()

Create a dataset locally at ./my_deeplake/, then run a similarity search.

db = DeepLake(dataset_path="./my_deeplake/", embedding_function=embeddings)
db.add_documents(docs)
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html
88595ad5f89f-1
# or shorter
# db = DeepLake.from_documents(docs, dataset_path="./my_deeplake/", embedding=embeddings, overwrite=True)

query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)

/home/leo/.local/lib/python3.10/site-packages/deeplake/util/check_latest_version.py:32: UserWarning: A newer version of deeplake (3.3.2) is available. It's recommended that you update to the latest version using `pip install -U deeplake`.
  warnings.warn(

./my_deeplake/ loaded successfully.

Evaluating ingest: 100%|██████████████████████████████████████| 1/1 [00:07<00:00

Dataset(path='./my_deeplake/', tensors=['embedding', 'ids', 'metadata', 'text'])

tensor     htype    shape       dtype    compression
-------    -------  -------     -------  -------
embedding  generic  (42, 1536)  float32  None
ids        text     (42, 1)     str      None
metadata   json     (42, 1)     str      None
text       text     (42, 1)     str      None

print(docs[0].page_content)

Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html
88595ad5f89f-2
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.

Later, you can reload the dataset without recomputing embeddings.

db = DeepLake(dataset_path="./my_deeplake/", embedding_function=embeddings, read_only=True)
docs = db.similarity_search(query)

./my_deeplake/ loaded successfully.

Deep Lake Dataset in ./my_deeplake/ already exists, loading from the storage

Dataset(path='./my_deeplake/', read_only=True, tensors=['embedding', 'ids', 'metadata', 'text'])

tensor     htype    shape       dtype    compression
-------    -------  -------     -------  -------
embedding  generic  (42, 1536)  float32  None
ids        text     (42, 1)     str      None
metadata   json     (42, 1)     str      None
text       text     (42, 1)     str      None

Deep Lake is, for now, single-writer and multiple-reader. Setting read_only=True helps to avoid acquiring the writer lock.

Retrieval Question/Answering#

from langchain.chains import RetrievalQA
from langchain.llms import OpenAIChat

qa = RetrievalQA.from_chain_type(llm=OpenAIChat(model='gpt-3.5-turbo'), chain_type='stuff', retriever=db.as_retriever())
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html
88595ad5f89f-3
/home/leo/.local/lib/python3.10/site-packages/langchain/llms/openai.py:624: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI`
  warnings.warn(

query = 'What did the president say about Ketanji Brown Jackson'
qa.run(query)

'The president nominated Ketanji Brown Jackson to serve on the United States Supreme Court. He described her as a former top litigator in private practice, a former federal public defender, a consensus builder, and from a family of public school educators and police officers. He also mentioned that she has received broad support from various groups since being nominated.'

Attribute based filtering in metadata#

import random

for d in docs:
    d.metadata['year'] = random.randint(2012, 2014)

db = DeepLake.from_documents(docs, embeddings, dataset_path="./my_deeplake/", overwrite=True)

./my_deeplake/ loaded successfully.

Evaluating ingest: 100%|██████████| 1/1 [00:04<00:00

Dataset(path='./my_deeplake/', tensors=['embedding', 'ids', 'metadata', 'text'])

tensor     htype    shape      dtype    compression
-------    -------  -------    -------  -------
embedding  generic  (4, 1536)  float32  None
ids        text     (4, 1)     str      None
metadata   json     (4, 1)     str      None
text       text     (4, 1)     str      None

db.similarity_search('What did the president say about Ketanji Brown Jackson', filter={'year': 2013})
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html
88595ad5f89f-4
100%|██████████| 4/4 [00:00<00:00, 1080.24it/s]

[Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2013}),
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html
88595ad5f89f-5
Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n\nWhile it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n\nAnd soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n\nSo tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n\nFirst, beat the opioid epidemic.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2013})]

Choosing distance function#

Distance functions: L2 for Euclidean, L1 for Nuclear, max for l-infinity distance, cos for cosine similarity, and dot for dot product.

db.similarity_search('What did the president say about Ketanji Brown Jackson?', distance_metric='cos')
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html
88595ad5f89f-6
[Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2013}),
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html
88595ad5f89f-7
Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2012}),
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html
88595ad5f89f-8
Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n\nWhile it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n\nAnd soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n\nSo tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n\nFirst, beat the opioid epidemic.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2013}),
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html
88595ad5f89f-9
Document(page_content='Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. \n\nAnd as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \n\nThat ends on my watch. \n\nMedicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. \n\nWe’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. \n\nLet’s pass the Paycheck Fairness Act and paid leave. \n\nRaise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \n\nLet’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2012})]

Maximal Marginal relevance#

Using maximal marginal relevance:

db.max_marginal_relevance_search('What did the president say about Ketanji Brown Jackson?')
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html
88595ad5f89f-10
[Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2013}),
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html
88595ad5f89f-11
Document(page_content='Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. \n\nAnd as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \n\nThat ends on my watch. \n\nMedicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. \n\nWe’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. \n\nLet’s pass the Paycheck Fairness Act and paid leave. \n\nRaise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \n\nLet’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2012}),
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html
88595ad5f89f-12
Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2012}),
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html
88595ad5f89f-13
Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n\nWhile it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n\nAnd soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n\nSo tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n\nFirst, beat the opioid epidemic.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2013})]

Delete dataset#

db.delete_dataset()

And if the delete fails, you can also force delete.

DeepLake.force_delete_by_path("./my_deeplake")

Deep Lake datasets on cloud (Activeloop, AWS, GCS, etc.) or local#

By default, Deep Lake datasets are stored in memory; if you want to persist locally or to any object storage, you can simply provide a path to the dataset. You can retrieve a token from app.activeloop.ai.

os.environ['ACTIVELOOP_TOKEN'] = getpass.getpass('Activeloop Token:')

# Embed and store the texts
username = "<username>"  # your username on app.activeloop.ai
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html
88595ad5f89f-14
username = "<username>" # your username on app.activeloop.ai dataset_path = f"hub://{username}/langchain_test" # could be also ./local/path (much faster locally), s3://bucket/path/to/dataset, gcs://path/to/dataset, etc. embedding = OpenAIEmbeddings() db = DeepLake(dataset_path=dataset_path, embedding_function=embeddings, overwrite=True) db.add_documents(docs) Your Deep Lake dataset has been successfully created! The dataset is private so make sure you are logged in! This dataset can be visualized in Jupyter Notebook by ds.visualize() or at https://app.activeloop.ai/davitbun/langchain_test hub://davitbun/langchain_test loaded successfully. Evaluating ingest: 100%|██████████| 1/1 [00:14<00:00 Dataset(path='hub://davitbun/langchain_test', tensors=['embedding', 'ids', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding generic (4, 1536) float32 None ids text (4, 1) str None metadata json (4, 1) str None text text (4, 1) str None ['d6d6ccb4-e187-11ed-b66d-41c5f7b85421', 'd6d6ccb5-e187-11ed-b66d-41c5f7b85421', 'd6d6ccb6-e187-11ed-b66d-41c5f7b85421',
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html
88595ad5f89f-15
 'd6d6ccb7-e187-11ed-b66d-41c5f7b85421']

query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)

Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.

Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.

One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.

And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.

Creating dataset on AWS S3#

dataset_path = f"s3://BUCKET/langchain_test"  # could be also ./local/path (much faster locally), hub://bucket/path/to/dataset, gcs://path/to/dataset, etc.

embedding = OpenAIEmbeddings()
db = DeepLake.from_documents(docs, dataset_path=dataset_path, embedding=embeddings, overwrite=True, creds = {
    'aws_access_key_id': os.environ['AWS_ACCESS_KEY_ID'],
    'aws_secret_access_key': os.environ['AWS_SECRET_ACCESS_KEY'],
    'aws_session_token': os.environ['AWS_SESSION_TOKEN'],  # Optional
})
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html
88595ad5f89f-16
s3://hub-2.0-datasets-n/langchain_test loaded successfully.

Evaluating ingest: 100%|██████████| 1/1 [00:10<00:00

Dataset(path='s3://hub-2.0-datasets-n/langchain_test', tensors=['embedding', 'ids', 'metadata', 'text'])

tensor     htype    shape      dtype    compression
-------    -------  -------    -------  -------
embedding  generic  (4, 1536)  float32  None
ids        text     (4, 1)     str      None
metadata   json     (4, 1)     str      None
text       text     (4, 1)     str      None

Deep Lake API#

You can access the Deep Lake dataset at db.ds.

# get structure of the dataset
db.ds.summary()

Dataset(path='hub://davitbun/langchain_test', tensors=['embedding', 'ids', 'metadata', 'text'])

tensor     htype    shape      dtype    compression
-------    -------  -------    -------  -------
embedding  generic  (4, 1536)  float32  None
ids        text     (4, 1)     str      None
metadata   json     (4, 1)     str      None
text       text     (4, 1)     str      None

# get embeddings numpy array
embeds = db.ds.embedding.numpy()

Transfer local dataset to cloud#

Copy an already created dataset to the cloud. You can also transfer from cloud to local.

import deeplake

username = "davitbun"  # your username on app.activeloop.ai
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html
88595ad5f89f-17
username = "davitbun" # your username on app.activeloop.ai source = f"hub://{username}/langchain_test" # could be local, s3, gcs, etc. destination = f"hub://{username}/langchain_test_copy" # could be local, s3, gcs, etc. deeplake.deepcopy(src=source, dest=destination, overwrite=True) Copying dataset: 100%|██████████| 56/56 [00:38<00:00 This dataset can be visualized in Jupyter Notebook by ds.visualize() or at https://app.activeloop.ai/davitbun/langchain_test_copy Your Deep Lake dataset has been successfully created! The dataset is private so make sure you are logged in! Dataset(path='hub://davitbun/langchain_test_copy', tensors=['embedding', 'ids', 'metadata', 'text']) db = DeepLake(dataset_path=destination, embedding_function=embeddings) db.add_documents(docs) This dataset can be visualized in Jupyter Notebook by ds.visualize() or at https://app.activeloop.ai/davitbun/langchain_test_copy / hub://davitbun/langchain_test_copy loaded successfully. Deep Lake Dataset in hub://davitbun/langchain_test_copy already exists, loading from the storage Dataset(path='hub://davitbun/langchain_test_copy', tensors=['embedding', 'ids', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding generic (4, 1536) float32 None ids text (4, 1) str None metadata json (4, 1) str None
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html
88595ad5f89f-18
text       text     (4, 1)     str      None

Evaluating ingest: 100%|██████████| 1/1 [00:31<00:00

Dataset(path='hub://davitbun/langchain_test_copy', tensors=['embedding', 'ids', 'metadata', 'text'])

tensor     htype    shape      dtype    compression
-------    -------  -------    -------  -------
embedding  generic  (8, 1536)  float32  None
ids        text     (8, 1)     str      None
metadata   json     (8, 1)     str      None
text       text     (8, 1)     str      None

['ad42f3fe-e188-11ed-b66d-41c5f7b85421',
 'ad42f3ff-e188-11ed-b66d-41c5f7b85421',
 'ad42f400-e188-11ed-b66d-41c5f7b85421',
 'ad42f401-e188-11ed-b66d-41c5f7b85421']
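The UserWarning in the Retrieval Question/Answering section above recommends ChatOpenAI over the deprecated OpenAIChat initialization; a minimal sketch of the suggested replacement, otherwise identical to the chain built earlier:

from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

# Same chain as before, but using the supported chat-model class.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name='gpt-3.5-turbo'),
    chain_type='stuff',
    retriever=db.as_retriever(),
)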
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html
f680ef7b7274-0
Weaviate#

Weaviate is an open-source vector database. It allows you to store data objects and vector embeddings from your favorite ML models, and scale seamlessly into billions of data objects.

This notebook shows how to use functionality related to the Weaviate vector database. See the Weaviate installation instructions.

!pip install weaviate-client

We want to use OpenAIEmbeddings, so we have to get the OpenAI API key.

import os
import getpass

os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
WEAVIATE_URL = getpass.getpass('WEAVIATE_URL:')
os.environ['WEAVIATE_API_KEY'] = getpass.getpass('WEAVIATE_API_KEY:')

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Weaviate
from langchain.document_loaders import TextLoader

loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()

import weaviate
import os

WEAVIATE_URL = ""
client = weaviate.Client(
    url=WEAVIATE_URL,
    additional_headers={
        'X-OpenAI-Api-Key': os.environ["OPENAI_API_KEY"]
    }
)

client.schema.delete_all()
client.schema.get()

schema = {
    "classes": [
        {
            "class": "Paragraph",
            "description": "A written paragraph",
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html
f680ef7b7274-1
{ "class": "Paragraph", "description": "A written paragraph", "vectorizer": "text2vec-openai", "moduleConfig": { "text2vec-openai": { "model": "ada", "modelVersion": "002", "type": "text" } }, "properties": [ { "dataType": ["text"], "description": "The content of the paragraph", "moduleConfig": { "text2vec-openai": { "skip": False, "vectorizePropertyName": False } }, "name": "content", }, ], }, ] } client.schema.create(schema) vectorstore = Weaviate(client, "Paragraph", "content") query = "What did the president say about Ketanji Brown Jackson" docs = vectorstore.similarity_search(query) print(docs[0].page_content) previous Tair next Zilliz By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on May 02, 2023.
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/weaviate.html
eae724cc0ec9-0
OpenSearch#

OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, and observability applications, licensed under Apache 2.0. OpenSearch is a distributed search and analytics engine based on Apache Lucene.

This notebook shows how to use functionality related to the OpenSearch database. To run, you should have an OpenSearch instance up and running: here.

similarity_search by default performs an Approximate k-NN search, which uses one of several algorithms such as lucene, nmslib, or faiss and is recommended for large datasets. To perform a brute-force search, we have other search methods known as Script Scoring and Painless Scripting. Check this for more details.

!pip install opensearch-py

We want to use OpenAIEmbeddings, so we have to get the OpenAI API key.

import os
import getpass

os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import OpenSearchVectorSearch
from langchain.document_loaders import TextLoader

loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/opensearch.html
eae724cc0ec9-1
docs = text_splitter.split_documents(documents) embeddings = OpenAIEmbeddings() docsearch = OpenSearchVectorSearch.from_documents(docs, embeddings, opensearch_url="http://localhost:9200") query = "What did the president say about Ketanji Brown Jackson" results = docsearch.similarity_search(query) print(results[0].page_content) similarity_search using Approximate k-NN Search with Custom Parameters# docsearch = OpenSearchVectorSearch.from_documents(docs, embeddings, opensearch_url="http://localhost:9200", engine="faiss", space_type="innerproduct", ef_construction=256, m=48) results = docsearch.similarity_search(query) print(results[0].page_content) similarity_search using Script Scoring with Custom Parameters# docsearch = OpenSearchVectorSearch.from_documents(docs, embeddings, opensearch_url="http://localhost:9200", is_appx_search=False) results = docsearch.similarity_search(query, k=1, search_type="script_scoring") print(results[0].page_content) similarity_search using Painless Scripting with Custom Parameters# docsearch = OpenSearchVectorSearch.from_documents(docs, embeddings, opensearch_url="http://localhost:9200", is_appx_search=False) filter = {"bool": {"filter": {"term": {"text": "smuggling"}}}}
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/opensearch.html
eae724cc0ec9-2
query = "What did the president say about Ketanji Brown Jackson" docs = docsearch.similarity_search("What did the president say about Ketanji Brown Jackson", search_type="painless_scripting", space_type="cosineSimilarity", pre_filter=filter) print(docs[0].page_content) Using a preexisting OpenSearch instance# It’s also possible to use a preexisting OpenSearch instance with documents that already have vectors present. # this is just an example, you would need to change these values to point to another opensearch instance docsearch = OpenSearchVectorSearch(index_name="index-*", embedding_function=embeddings, opensearch_url="http://localhost:9200") # you can specify custom field names to match the fields you're using to store your embedding, document text value, and metadata docs = docsearch.similarity_search("Who was asking about getting lunch today?", search_type="script_scoring", space_type="cosinesimil", vector_field="message_embedding", text_field="message", metadata_field="message_metadata") previous MyScale next PGVector Contents similarity_search using Approximate k-NN Search with Custom Parameters similarity_search using Script Scoring with Custom Parameters similarity_search using Painless Scripting with Custom Parameters Using a preexisting OpenSearch instance By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on May 02, 2023.
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/opensearch.html
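For completeness, a short hedged sketch of building the same store from raw strings with from_texts, which mirrors from_documents. The bulk_size argument is an assumption about this version of the integration and controls how many documents are sent per bulk-indexing request; k widens the result set.

# assumes `docs` and `embeddings` from the walkthrough above
texts = [d.page_content for d in docs]
docsearch = OpenSearchVectorSearch.from_texts(
    texts, embeddings, opensearch_url="http://localhost:9200", bulk_size=500
)
results = docsearch.similarity_search("What did the president say about Ketanji Brown Jackson", k=10)
print(len(results))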
1dbd8201331c-0
Redis# Redis (Remote Dictionary Server) is an in-memory data structure store, used as a distributed, in-memory key–value database, cache and message broker, with optional durability. This notebook shows how to use functionality related to the Redis vector database. !pip install redis We want to use OpenAIEmbeddings so we have to get the OpenAI API Key. import os import getpass os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:') from langchain.embeddings import OpenAIEmbeddings from langchain.text_splitter import CharacterTextSplitter from langchain.vectorstores.redis import Redis from langchain.document_loaders import TextLoader loader = TextLoader('../../../state_of_the_union.txt') documents = loader.load() text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) docs = text_splitter.split_documents(documents) embeddings = OpenAIEmbeddings() rds = Redis.from_documents(docs, embeddings, redis_url="redis://localhost:6379", index_name='link') rds.index_name 'link' query = "What did the president say about Ketanji Brown Jackson" results = rds.similarity_search(query) print(results[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/redis.html
1dbd8201331c-1
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. print(rds.add_texts(["Ankush went to Princeton"])) ['doc:link:d7d02e3faf1b40bbbe29a683ff75b280'] query = "Princeton" results = rds.similarity_search(query) print(results[0].page_content) Ankush went to Princeton # Load from existing index rds = Redis.from_existing_index(embeddings, redis_url="redis://localhost:6379", index_name='link') query = "What did the president say about Ketanji Brown Jackson" results = rds.similarity_search(query) print(results[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/redis.html
1dbd8201331c-2
RedisVectorStoreRetriever# Here we go over different options for using the vector store as a retriever. There are multiple search methods we can use to do retrieval. By default, it will use semantic similarity. retriever = rds.as_retriever() docs = retriever.get_relevant_documents(query) We can also use similarity_limit as a search method. This only returns documents if they are similar enough. retriever = rds.as_retriever(search_type="similarity_limit") # Here we can see it doesn't return any results because there are no relevant documents retriever.get_relevant_documents("where did ankush go to college?")
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/redis.html
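A short hedged sketch contrasting the two retriever modes shown above on the same store (it assumes rds and query from the walkthrough; the college question is deliberately off-topic):

retriever = rds.as_retriever()  # default semantic similarity: always returns the top-k documents
print(len(retriever.get_relevant_documents(query)))
limit_retriever = rds.as_retriever(search_type="similarity_limit")  # only returns documents above the similarity cutoff
print(len(limit_retriever.get_relevant_documents("where did ankush go to college?")))  # expected: 0, nothing is similar enough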
e3c94cbc6d50-0
.ipynb .pdf ElasticSearch Contents Installation Example ElasticSearch# Elasticsearch is a distributed, RESTful search and analytics engine. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents. This notebook shows how to use functionality related to the Elasticsearch database. Installation# Check out Elasticsearch installation instructions. To connect to an Elasticsearch instance that does not require login credentials, pass the Elasticsearch URL and index name along with the embedding object to the constructor. Example: from langchain import ElasticVectorSearch from langchain.embeddings import OpenAIEmbeddings embedding = OpenAIEmbeddings() elastic_vector_search = ElasticVectorSearch( elasticsearch_url="http://localhost:9200", index_name="test_index", embedding=embedding ) To connect to an Elasticsearch instance that requires login credentials, including Elastic Cloud, use the Elasticsearch URL format https://username:password@es_host:9243. For example, to connect to Elastic Cloud, create the Elasticsearch URL with the required authentication details and pass it to the ElasticVectorSearch constructor as the named parameter elasticsearch_url. You can obtain your Elastic Cloud URL and login credentials by logging in to the Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and navigating to the “Deployments” page. To obtain your Elastic Cloud password for the default “elastic” user: Log in to the Elastic Cloud console at https://cloud.elastic.co Go to “Security” > “Users” Locate the “elastic” user and click “Edit” Click “Reset password” Follow the prompts to reset the password Format for Elastic Cloud URLs is
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/elasticsearch.html
e3c94cbc6d50-1
Click “Reset password” Follow the prompts to reset the password Format for Elastic Cloud URLs is https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243. Example: from langchain import ElasticVectorSearch from langchain.embeddings import OpenAIEmbeddings embedding = OpenAIEmbeddings() elastic_host = "cluster_id.region_id.gcp.cloud.es.io" elasticsearch_url = f"https://username:password@{elastic_host}:9243" elastic_vector_search = ElasticVectorSearch( elasticsearch_url=elasticsearch_url, index_name="test_index", embedding=embedding ) !pip install elasticsearch import os import getpass os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:') Example# from langchain.embeddings.openai import OpenAIEmbeddings from langchain.text_splitter import CharacterTextSplitter from langchain.vectorstores import ElasticVectorSearch from langchain.document_loaders import TextLoader loader = TextLoader('../../../state_of_the_union.txt') documents = loader.load() text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) docs = text_splitter.split_documents(documents) embeddings = OpenAIEmbeddings() db = ElasticVectorSearch.from_documents(docs, embeddings, elasticsearch_url="http://localhost:9200") query = "What did the president say about Ketanji Brown Jackson" docs = db.similarity_search(query) print(docs[0].page_content) In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. We cannot let this happen.
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/elasticsearch.html
e3c94cbc6d50-2
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/elasticsearch.html
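As a usage note, the store is not read-only after creation: the base vector store API exposes add_texts for appending new entries to the same index. A minimal hedged sketch (assuming db from above; the sentence is illustrative):

# embeds the new strings and indexes them alongside the existing documents
db.add_texts(["Ankush went to Princeton"])
results = db.similarity_search("Princeton")
print(results[0].page_content)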
0d08186f0734-0
Milvus# Milvus is a database that stores, indexes, and manages massive embedding vectors generated by deep neural networks and other machine learning (ML) models. This notebook shows how to use functionality related to the Milvus vector database. To run, you should have a Milvus instance up and running. !pip install pymilvus We want to use OpenAIEmbeddings so we have to get the OpenAI API Key. import os import getpass os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:') from langchain.embeddings.openai import OpenAIEmbeddings from langchain.text_splitter import CharacterTextSplitter from langchain.vectorstores import Milvus from langchain.document_loaders import TextLoader loader = TextLoader('../../../state_of_the_union.txt') documents = loader.load() text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) docs = text_splitter.split_documents(documents) embeddings = OpenAIEmbeddings() vector_db = Milvus.from_documents( docs, embeddings, connection_args={"host": "127.0.0.1", "port": "19530"}, ) query = "What did the president say about Ketanji Brown Jackson" docs = vector_db.similarity_search(query) docs[0]
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/milvus.html
d518da70349e-0
Chroma# Chroma is a database for building AI applications with embeddings. This notebook shows how to use functionality related to the Chroma vector database. !pip install chromadb # get a token: https://platform.openai.com/account/api-keys from getpass import getpass OPENAI_API_KEY = getpass() import os os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY from langchain.embeddings.openai import OpenAIEmbeddings from langchain.text_splitter import CharacterTextSplitter from langchain.vectorstores import Chroma from langchain.document_loaders import TextLoader loader = TextLoader('../../../state_of_the_union.txt') documents = loader.load() text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) docs = text_splitter.split_documents(documents) embeddings = OpenAIEmbeddings() db = Chroma.from_documents(docs, embeddings) query = "What did the president say about Ketanji Brown Jackson" docs = db.similarity_search(query) Using embedded DuckDB without persistence: data will be transient print(docs[0].page_content) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/chroma.html
d518da70349e-1
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. Similarity search with score# docs = db.similarity_search_with_score(query) docs[0] (Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'}), 0.3949805498123169) Persistence#
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/chroma.html
d518da70349e-2
The steps below cover how to persist a ChromaDB instance. Initialize PersistedChromaDB# Create embeddings for each chunk and insert into the Chroma vector database. The persist_directory argument tells ChromaDB where to store the database when it’s persisted. # Embed and store the texts # Supplying a persist_directory will store the embeddings on disk persist_directory = 'db' embedding = OpenAIEmbeddings() vectordb = Chroma.from_documents(documents=docs, embedding=embedding, persist_directory=persist_directory) Running Chroma using direct local API. No existing DB found in db, skipping load No existing DB found in db, skipping load Persist the Database# We should call persist() to ensure the embeddings are written to disk. vectordb.persist() vectordb = None Persisting DB to disk, putting it in the save folder db PersistentDuckDB del, about to run persist Persisting DB to disk, putting it in the save folder db Load the Database from disk, and create the chain# Be sure to pass the same persist_directory and embedding_function as you did when you instantiated the database. Then initialize the chain we will use for question answering (a sketch of such a chain is given at the end of this page). # Now we can load the persisted database from disk, and use it as normal. vectordb = Chroma(persist_directory=persist_directory, embedding_function=embedding) Running Chroma using direct local API. loaded in 4 embeddings loaded in 1 collections Retriever options# This section goes over different options for how to use Chroma as a retriever. MMR# In addition to using similarity search in the retriever object, you can also use mmr. retriever = db.as_retriever(search_type="mmr") retriever.get_relevant_documents(query)[0]
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/chroma.html
d518da70349e-3
Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/chroma.html
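The persistence walkthrough above says to initialize a question-answering chain over the reloaded database but never shows one. A minimal hedged sketch (the model and chain type are illustrative choices, not the only ones):

from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

# assumes `vectordb` was reloaded from disk as shown above
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",  # stuff all retrieved chunks into a single prompt
    retriever=vectordb.as_retriever(),
)
print(qa.run("What did the president say about Ketanji Brown Jackson"))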
5d04a524fa9c-0
Annoy# “Annoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings to search for points in space that are close to a given query point. It also creates large read-only file-based data structures that are mmapped into memory so that many processes may share the same data.” (via Annoy) This notebook shows how to use functionality related to the Annoy vector database. Note: Annoy is read-only; once the index is built you cannot add any more embeddings! If you want to progressively add new entries to your VectorStore, then better choose an alternative. Create VectorStore from texts# from langchain.embeddings import HuggingFaceEmbeddings from langchain.vectorstores import Annoy embeddings_func = HuggingFaceEmbeddings() texts = ["pizza is great", "I love salad", "my car", "a dog"] # default metric is angular vector_store = Annoy.from_texts(texts, embeddings_func) # allows for custom annoy parameters, defaults are n_trees=100, n_jobs=-1, metric="angular" vector_store_v2 = Annoy.from_texts( texts, embeddings_func, metric="dot", n_trees=100, n_jobs=1 ) vector_store.similarity_search("food", k=3) [Document(page_content='pizza is great', metadata={}), Document(page_content='I love salad', metadata={}), Document(page_content='my car', metadata={})]
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/annoy.html
5d04a524fa9c-1
# the score is a distance metric, so lower is better vector_store.similarity_search_with_score("food", k=3) [(Document(page_content='pizza is great', metadata={}), 1.0944390296936035), (Document(page_content='I love salad', metadata={}), 1.1273186206817627), (Document(page_content='my car', metadata={}), 1.1580758094787598)] Create VectorStore from docs# from langchain.document_loaders import TextLoader from langchain.text_splitter import CharacterTextSplitter loader = TextLoader("../../../state_of_the_union.txt") documents = loader.load() text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) docs = text_splitter.split_documents(documents) docs[:5]
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/annoy.html
5d04a524fa9c-2
[Document(page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.', metadata={'source': '../../../state_of_the_union.txt'}),
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/annoy.html
5d04a524fa9c-3
Document(page_content='Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \n\nIn this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight. \n\nLet each of us here tonight in this Chamber send an unmistakable signal to Ukraine and to the world. \n\nPlease rise if you are able and show that, Yes, we the United States of America stand with the Ukrainian people. \n\nThroughout our history we’ve learned this lesson when dictators do not pay a price for their aggression they cause more chaos. \n\nThey keep moving. \n\nAnd the costs and the threats to America and the world keep rising. \n\nThat’s why the NATO Alliance was created to secure peace and stability in Europe after World War 2. \n\nThe United States is a member along with 29 other nations. \n\nIt matters. American diplomacy matters. American resolve matters.', metadata={'source': '../../../state_of_the_union.txt'}),
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/annoy.html
5d04a524fa9c-4
Document(page_content='Putin’s latest attack on Ukraine was premeditated and unprovoked. \n\nHe rejected repeated efforts at diplomacy. \n\nHe thought the West and NATO wouldn’t respond. And he thought he could divide us at home. Putin was wrong. We were ready. Here is what we did. \n\nWe prepared extensively and carefully. \n\nWe spent months building a coalition of other freedom-loving nations from Europe and the Americas to Asia and Africa to confront Putin. \n\nI spent countless hours unifying our European allies. We shared with the world in advance what we knew Putin was planning and precisely how he would try to falsely justify his aggression. \n\nWe countered Russia’s lies with truth. \n\nAnd now that he has acted the free world is holding him accountable. \n\nAlong with twenty-seven members of the European Union including France, Germany, Italy, as well as countries like the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and many others, even Switzerland.', metadata={'source': '../../../state_of_the_union.txt'}),
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/annoy.html
5d04a524fa9c-5
Document(page_content='We are inflicting pain on Russia and supporting the people of Ukraine. Putin is now isolated from the world more than ever. \n\nTogether with our allies –we are right now enforcing powerful economic sanctions. \n\nWe are cutting off Russia’s largest banks from the international financial system. \n\nPreventing Russia’s central bank from defending the Russian Ruble making Putin’s $630 Billion “war fund” worthless. \n\nWe are choking off Russia’s access to technology that will sap its economic strength and weaken its military for years to come. \n\nTonight I say to the Russian oligarchs and corrupt leaders who have bilked billions of dollars off this violent regime no more. \n\nThe U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs. \n\nWe are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains.', metadata={'source': '../../../state_of_the_union.txt'}),
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/annoy.html
5d04a524fa9c-6
Document(page_content='And tonight I am announcing that we will join our allies in closing off American air space to all Russian flights – further isolating Russia – and adding an additional squeeze –on their economy. The Ruble has lost 30% of its value. \n\nThe Russian stock market has lost 40% of its value and trading remains suspended. Russia’s economy is reeling and Putin alone is to blame. \n\nTogether with our allies we are providing support to the Ukrainians in their fight for freedom. Military assistance. Economic assistance. Humanitarian assistance. \n\nWe are giving more than $1 Billion in direct assistance to Ukraine. \n\nAnd we will continue to aid the Ukrainian people as they defend their country and to help ease their suffering. \n\nLet me be clear, our forces are not engaged and will not engage in conflict with Russian forces in Ukraine. \n\nOur forces are not going to Europe to fight in Ukraine, but to defend our NATO Allies – in the event that Putin decides to keep moving west.', metadata={'source': '../../../state_of_the_union.txt'})] vector_store_from_docs = Annoy.from_documents(docs, embeddings_func) query = "What did the president say about Ketanji Brown Jackson" docs = vector_store_from_docs.similarity_search(query) print(docs[0].page_content[:100]) Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Ac Create VectorStore via existing embeddings# embs = embeddings_func.embed_documents(texts) data = list(zip(texts, embs)) vector_store_from_embeddings = Annoy.from_embeddings(data, embeddings_func) vector_store_from_embeddings.similarity_search_with_score("food", k=3) [(Document(page_content='pizza is great', metadata={}), 1.0944390296936035),
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/annoy.html
5d04a524fa9c-7
(Document(page_content='I love salad', metadata={}), 1.1273186206817627), (Document(page_content='my car', metadata={}), 1.1580758094787598)] Search via embeddings# motorbike_emb = embeddings_func.embed_query("motorbike") vector_store.similarity_search_by_vector(motorbike_emb, k=3) [Document(page_content='my car', metadata={}), Document(page_content='a dog', metadata={}), Document(page_content='pizza is great', metadata={})] vector_store.similarity_search_with_score_by_vector(motorbike_emb, k=3) [(Document(page_content='my car', metadata={}), 1.0870471000671387), (Document(page_content='a dog', metadata={}), 1.2095637321472168), (Document(page_content='pizza is great', metadata={}), 1.3254905939102173)] Search via docstore id# vector_store.index_to_docstore_id {0: '2d1498a8-a37c-4798-acb9-0016504ed798', 1: '2d30aecc-88e0-4469-9d51-0ef7e9858e6d', 2: '927f1120-985b-4691-b577-ad5cb42e011c', 3: '3056ddcf-a62f-48c8-bd98-b9e57a3dfcae'} some_docstore_id = 0 # texts[0] vector_store.docstore._dict[vector_store.index_to_docstore_id[some_docstore_id]] Document(page_content='pizza is great', metadata={}) # same document has distance 0
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/annoy.html
5d04a524fa9c-8
vector_store.similarity_search_with_score_by_index(some_docstore_id, k=3) [(Document(page_content='pizza is great', metadata={}), 0.0), (Document(page_content='I love salad', metadata={}), 1.0734446048736572), (Document(page_content='my car', metadata={}), 1.2895267009735107)] Save and load# vector_store.save_local("my_annoy_index_and_docstore") saving config loaded_vector_store = Annoy.load_local( "my_annoy_index_and_docstore", embeddings=embeddings_func ) # same document has distance 0 loaded_vector_store.similarity_search_with_score_by_index(some_docstore_id, k=3) [(Document(page_content='pizza is great', metadata={}), 0.0), (Document(page_content='I love salad', metadata={}), 1.0734446048736572), (Document(page_content='my car', metadata={}), 1.2895267009735107)] Construct from scratch# import uuid from annoy import AnnoyIndex from langchain.docstore.document import Document from langchain.docstore.in_memory import InMemoryDocstore metadatas = [{"x": "food"}, {"x": "food"}, {"x": "stuff"}, {"x": "animal"}] # embeddings embeddings = embeddings_func.embed_documents(texts) # embedding dim f = len(embeddings[0]) # index metric = "angular" index = AnnoyIndex(f, metric=metric) for i, emb in enumerate(embeddings): index.add_item(i, emb) index.build(10) # docstore documents = []
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/annoy.html
5d04a524fa9c-9
for i, text in enumerate(texts): metadata = metadatas[i] if metadatas else {} documents.append(Document(page_content=text, metadata=metadata)) index_to_docstore_id = {i: str(uuid.uuid4()) for i in range(len(documents))} docstore = InMemoryDocstore( {index_to_docstore_id[i]: doc for i, doc in enumerate(documents)} ) db_manually = Annoy( embeddings_func.embed_query, index, metric, docstore, index_to_docstore_id ) db_manually.similarity_search_with_score("eating!", k=3) [(Document(page_content='pizza is great', metadata={'x': 'food'}), 1.1314140558242798), (Document(page_content='I love salad', metadata={'x': 'food'}), 1.1668788194656372), (Document(page_content='my car', metadata={'x': 'stuff'}), 1.226445198059082)]
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/annoy.html
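A hedged note on reading the scores above: with the default angular metric, Annoy reports distances as sqrt(2 * (1 - cosine_similarity)) (per the Annoy README), so you can recover an approximate cosine similarity from a returned score:

score = 1.0944390296936035  # angular distance returned above for "pizza is great" vs "food"
cosine_similarity = 1 - (score ** 2) / 2  # inverts the angular-distance definition
print(round(cosine_similarity, 3))  # about 0.401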
8c60ce1da1e8-0
PGVector# PGVector is an open-source vector similarity search for Postgres. It supports: exact and approximate nearest neighbor search; L2 distance, inner product, and cosine distance. This notebook shows how to use the Postgres vector database (PGVector). See the installation instructions. !pip install pgvector We want to use OpenAIEmbeddings so we have to get the OpenAI API Key. import os import getpass os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:') ## Loading Environment Variables from typing import List, Tuple from dotenv import load_dotenv load_dotenv() False from langchain.embeddings.openai import OpenAIEmbeddings from langchain.text_splitter import CharacterTextSplitter from langchain.vectorstores.pgvector import PGVector from langchain.document_loaders import TextLoader from langchain.docstore.document import Document loader = TextLoader('../../../state_of_the_union.txt') documents = loader.load() text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) docs = text_splitter.split_documents(documents) embeddings = OpenAIEmbeddings() ## PGVector needs the connection string to the database. ## We will load it from the environment variables. CONNECTION_STRING = PGVector.connection_string_from_db_params( driver=os.environ.get("PGVECTOR_DRIVER", "psycopg2"), host=os.environ.get("PGVECTOR_HOST", "localhost"), port=int(os.environ.get("PGVECTOR_PORT", "5432")), database=os.environ.get("PGVECTOR_DATABASE", "postgres"),
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pgvector.html
8c60ce1da1e8-1
user=os.environ.get("PGVECTOR_USER", "postgres"), password=os.environ.get("PGVECTOR_PASSWORD", "postgres"), ) ## Example # postgresql+psycopg2://username:password@localhost:5432/database_name Similarity search with score# Similarity Search with Euclidean Distance (Default)# # The PGVector module will try to create a table with the name of the collection, so make sure that the collection name is unique and the user has permission to create a table. db = PGVector.from_documents( embedding=embeddings, documents=docs, collection_name="state_of_the_union", connection_string=CONNECTION_STRING, ) query = "What did the president say about Ketanji Brown Jackson" docs_with_score: List[Tuple[Document, float]] = db.similarity_search_with_score(query) for doc, score in docs_with_score: print("-" * 80) print("Score: ", score) print(doc.page_content) print("-" * 80) -------------------------------------------------------------------------------- Score: 0.6076628081132506 Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pgvector.html
8c60ce1da1e8-2
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.6076628081132506 Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Score: 0.6076804780049968 Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pgvector.html
8c60ce1da1e8-3
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. --------------------------------------------------------------------------------
https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/pgvector.html
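The duplicated results above suggest the collection accumulated the same documents across repeated runs. A hedged sketch of recreating the collection from scratch on each run; the pre_delete_collection flag is an assumption about this version of the integration and drops any existing collection before inserting:

db = PGVector.from_documents(
    embedding=embeddings,
    documents=docs,
    collection_name="state_of_the_union",
    connection_string=CONNECTION_STRING,
    pre_delete_collection=True,  # assumed flag: drop the existing collection first
)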
3e9bd7a2695c-0
Locally Hosted Setup# This page contains instructions for installing and then setting up the environment to use the locally hosted version of tracing. Installation# Ensure you have Docker installed (see Get Docker) and that it’s running. Install the latest version of langchain: pip install langchain or pip install langchain -U to upgrade your existing version. Run langchain-server. This command was installed automatically when you ran the above command (pip install langchain). This will spin up the server in the terminal, hosted on port 4173 by default. Once you see the terminal output langchain-langchain-frontend-1 | ➜ Local: http://localhost:4173/, navigate to http://localhost:4173/ You should see a page with your tracing sessions. See the overview page for a walkthrough of the UI. Currently, trace data is not guaranteed to be persisted between runs of langchain-server. If you want to persist your data, you can mount a volume to the Docker container. See the Docker docs for more info. To stop the server, press Ctrl+C in the terminal where you ran langchain-server. Environment Setup# After installation, you must now set up your environment to use tracing. This can be done by setting an environment variable in your terminal by running export LANGCHAIN_HANDLER=langchain. You can also do this by adding the below snippet to the top of every script. IMPORTANT: this must go at the VERY TOP of your script, before you import anything from langchain. import os os.environ["LANGCHAIN_HANDLER"] = "langchain"
https://python.langchain.com/en/latest/tracing/local_installation.html
1b6354d46893-0
Tracing Walkthrough# There are two recommended ways to trace your LangChains: Setting the LANGCHAIN_TRACING environment variable to “true”. Using a context manager with tracing_enabled() to trace a particular block of code. Note that if the environment variable is set, all code will be traced, regardless of whether or not it’s within the context manager. import os os.environ["LANGCHAIN_TRACING"] = "true" ## Uncomment below if using hosted setup. # os.environ["LANGCHAIN_ENDPOINT"] = "https://langchain-api-gateway-57eoxz8z.uc.gateway.dev" ## Uncomment below if you want traces to be recorded to "my_session" instead of "default". # os.environ["LANGCHAIN_SESSION"] = "my_session" ## Better to set this environment variable in the terminal. ## Uncomment below if using hosted version. Replace "my_api_key" with your actual API key. # os.environ["LANGCHAIN_API_KEY"] = "my_api_key" import langchain from langchain.agents import Tool, initialize_agent, load_tools from langchain.agents import AgentType from langchain.callbacks import tracing_enabled from langchain.chat_models import ChatOpenAI from langchain.llms import OpenAI # Agent run with tracing. Ensure that OPENAI_API_KEY is set appropriately to run this example. llm = OpenAI(temperature=0) tools = load_tools(["llm-math"], llm=llm) agent = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True ) agent.run("What is 2 raised to .123243 power?") > Entering new AgentExecutor chain... I need to use a calculator to solve this. Action: Calculator
https://python.langchain.com/en/latest/tracing/agent_with_tracing.html
1b6354d46893-1
Action Input: 2^.123243 Observation: Answer: 1.0891804557407723 Thought: I now know the final answer. Final Answer: 1.0891804557407723 > Finished chain. '1.0891804557407723' # Agent run with tracing using a chat model agent = initialize_agent( tools, ChatOpenAI(temperature=0), agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True ) agent.run("What is 2 raised to .123243 power?") > Entering new AgentExecutor chain... I need to use a calculator to solve this. Action: Calculator Action Input: 2 ^ .123243 Observation: Answer: 1.0891804557407723 Thought:I now know the answer to the question. Final Answer: 1.0891804557407723 > Finished chain. '1.0891804557407723' # Both of the agent runs will be traced because the environment variable is set agent.run("What is 2 raised to .123243 power?") with tracing_enabled() as session: agent.run("What is 5 raised to .123243 power?") > Entering new AgentExecutor chain... I need to use a calculator to solve this. Action: Calculator Action Input: 2 ^ .123243 Observation: Answer: 1.0891804557407723 Thought:I now know the answer to the question. Final Answer: 1.0891804557407723 > Finished chain. > Entering new AgentExecutor chain... I need to use a calculator to solve this. Action: Calculator Action Input: 5 ^ .123243
https://python.langchain.com/en/latest/tracing/agent_with_tracing.html
1b6354d46893-2
Observation: Answer: 1.2193914912400514 Thought:I now know the answer to the question. Final Answer: 1.2193914912400514 > Finished chain. # Now, we unset the environment variable and use a context manager. if "LANGCHAIN_TRACING" in os.environ: del os.environ["LANGCHAIN_TRACING"] # here, we are writing traces to "my_session" with tracing_enabled("my_session") as session: assert session agent.run("What is 5 raised to .123243 power?") # this should be traced agent.run("What is 2 raised to .123243 power?") # this should not be traced > Entering new AgentExecutor chain... I need to use a calculator to solve this. Action: Calculator Action Input: 5 ^ .123243 Observation: Answer: 1.2193914912400514 Thought:I now know the answer to the question. Final Answer: 1.2193914912400514 > Finished chain. > Entering new AgentExecutor chain... I need to use a calculator to solve this. Action: Calculator Action Input: 2 ^ .123243 Observation: Answer: 1.0891804557407723 Thought:I now know the answer to the question. Final Answer: 1.0891804557407723 > Finished chain. '1.0891804557407723' # The context manager is concurrency safe: import asyncio if "LANGCHAIN_TRACING" in os.environ: del os.environ["LANGCHAIN_TRACING"]
https://python.langchain.com/en/latest/tracing/agent_with_tracing.html
1b6354d46893-3
questions = [f"What is {i} raised to .123 power?" for i in range(1,4)] # start a background task task = asyncio.create_task(agent.arun(questions[0])) # this should not be traced with tracing_enabled() as session: assert session tasks = [agent.arun(q) for q in questions[1:3]] # these should be traced await asyncio.gather(*tasks) await task > Entering new AgentExecutor chain... > Entering new AgentExecutor chain... > Entering new AgentExecutor chain... I need to use a calculator to solve this. Action: Calculator Action Input: 3^0.123I need to use a calculator to solve this. Action: Calculator Action Input: 2^0.123Any number raised to the power of 0 is 1, but I'm not sure about a decimal power. Action: Calculator Action Input: 1^.123 Observation: Answer: 1.1446847956963533 Thought: Observation: Answer: 1.0889970153361064 Thought: Observation: Answer: 1.0 Thought: > Finished chain. > Finished chain. > Finished chain. '1.0'
https://python.langchain.com/en/latest/tracing/agent_with_tracing.html
03a0ae1db37c-0
Cloud Hosted Setup# We offer a hosted version of tracing at langchainplus.vercel.app. You can use this to view traces from your run without having to run the server locally. Note: we are currently only offering this to a limited number of users. The hosted platform is VERY alpha, in active development, and data might be dropped at any time. Don’t depend on data being persisted in the system long term and don’t log traces that may contain sensitive information. If you’re interested in using the hosted platform, please fill out the form here. Installation# Log in to the system and click “API Key” in the top right corner. Generate a new key and keep it safe. You will need it to authenticate with the system. Environment Setup# After installation, you must now set up your environment to use tracing. This can be done by setting an environment variable in your terminal by running export LANGCHAIN_HANDLER=langchain. You can also do this by adding the below snippet to the top of every script. IMPORTANT: this must go at the VERY TOP of your script, before you import anything from langchain. import os os.environ["LANGCHAIN_HANDLER"] = "langchain" You will also need to set an environment variable to specify the endpoint and your API key. This can be done with the following environment variables: LANGCHAIN_ENDPOINT = "https://langchain-api-gateway-57eoxz8z.uc.gateway.dev" LANGCHAIN_API_KEY - set this to the API key you generated during installation. An example of adding all relevant environment variables is below: import os os.environ["LANGCHAIN_HANDLER"] = "langchain" os.environ["LANGCHAIN_ENDPOINT"] = "https://langchain-api-gateway-57eoxz8z.uc.gateway.dev"
https://python.langchain.com/en/latest/tracing/hosted_installation.html
03a0ae1db37c-1
os.environ["LANGCHAIN_API_KEY"] = "my_api_key" # Don't commit this to your repo! Better to set it in your terminal. Contents Installation Environment Setup By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on May 02, 2023.
https://python.langchain.com/en/latest/tracing/hosted_installation.html
2c2ecee42317-0
Extraction# Conceptual Guide Most APIs and databases still deal with structured information. Therefore, in order to work better with them, it can be useful to extract structured information from text. Examples of this include: Extracting a structured row to insert into a database from a sentence Extracting multiple rows to insert into a database from a long document Extracting the correct API parameters from a user query This work is closely related to output parsing. Output parsers are responsible for instructing the LLM to respond in a specific format. In this case, the output parsers specify the format of the data you would like to extract from the document. Then, in addition to the output format instructions, the prompt should also contain the data you would like to extract information from. While normal output parsers are good enough for basic structuring of response data, when doing extraction you often want to extract more complicated or nested structures. For a deep dive on extraction, we recommend checking out kor, a library that uses the existing LangChain chain and OutputParser abstractions but deep dives on allowing extraction of more complicated schemas.
https://python.langchain.com/en/latest/use_cases/extraction.html
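A minimal hedged sketch of the output-parser pattern described above, using StructuredOutputParser to pull two fields out of a sentence (the schema and input text are illustrative):

from langchain.llms import OpenAI
from langchain.output_parsers import StructuredOutputParser, ResponseSchema
from langchain.prompts import PromptTemplate

response_schemas = [
    ResponseSchema(name="name", description="the person's name"),
    ResponseSchema(name="age", description="the person's age"),
]
parser = StructuredOutputParser.from_response_schemas(response_schemas)
prompt = PromptTemplate(
    template="Extract the requested fields from the text.\n{format_instructions}\nText: {text}",
    input_variables=["text"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)
llm = OpenAI(temperature=0)
output = llm(prompt.format(text="Alice is a 30 year old engineer."))
print(parser.parse(output))  # e.g. {'name': 'Alice', 'age': '30'}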
8bbb3f0d176c-0
Querying Tabular Data# Conceptual Guide Lots of data and information is stored in tabular data, whether it be CSVs, Excel sheets, or SQL tables. This page covers all resources available in LangChain for working with data in this format. Document Loading# If you have text data stored in a tabular format, you may want to load the data into a Document and then index it as you would other text/unstructured data. For this, you should use a document loader like the CSVLoader, then create an index over that data and query it that way. Querying# If you have more numeric tabular data, or have a large amount of data and don’t want to index it, you should get started by looking at the various chains and agents we have for dealing with this data. Chains# If you are just getting started and you have relatively small/simple tabular data, you should get started with chains. Chains are a sequence of predetermined steps, so they are good to get started with, as they give you more control and let you understand what is happening better. SQL Database Chain Agents# Agents are more complex, and involve multiple queries to the LLM to understand what to do. The downside of agents is that you have less control. The upside is that they are more powerful, which allows you to use them on larger databases and more complex schemas. SQL Agent Pandas Agent CSV Agent
https://python.langchain.com/en/latest/use_cases/tabular.html
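A minimal hedged sketch of the SQL Database Chain mentioned above (the database path and question are hypothetical; depending on your LangChain version, SQLDatabaseChain.from_llm may be the preferred constructor):

from langchain import OpenAI, SQLDatabase, SQLDatabaseChain

db = SQLDatabase.from_uri("sqlite:///example.db")  # hypothetical local database
db_chain = SQLDatabaseChain(llm=OpenAI(temperature=0), database=db, verbose=True)
db_chain.run("How many rows are in the main table?")  # hypothetical question about that database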
1b92829d05fb-0
Interacting with APIs# Conceptual Guide Lots of data and information is stored behind APIs. This page covers all resources available in LangChain for working with APIs. Chains# If you are just getting started and you have relatively simple APIs, you should get started with chains. Chains are a sequence of predetermined steps, so they are good to get started with, as they give you more control and let you understand what is happening better. API Chain Agents# Agents are more complex, and involve multiple queries to the LLM to understand what to do. The downside of agents is that you have less control. The upside is that they are more powerful, which allows you to use them on larger and more complex schemas. OpenAPI Agent
https://python.langchain.com/en/latest/use_cases/apis.html
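A minimal hedged sketch of the API Chain mentioned above, using the bundled Open-Meteo API docs (the question is illustrative):

from langchain.llms import OpenAI
from langchain.chains import APIChain
from langchain.chains.api import open_meteo_docs

llm = OpenAI(temperature=0)
chain = APIChain.from_llm_and_api_docs(llm, open_meteo_docs.OPEN_METEO_DOCS, verbose=True)
chain.run("What is the current temperature in Munich in degrees Celsius?")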
e6ae67a9cee5-0
Code Understanding# Overview LangChain is a useful tool for parsing GitHub code repositories. By leveraging VectorStores, Conversational RetrieverChain, and GPT-4, it can answer questions in the context of an entire GitHub repository or generate new code. This documentation page outlines the essential components of the system and guides you in using LangChain for better code comprehension, contextual question answering, and code generation in GitHub repositories. Conversational Retriever Chain# Conversational RetrieverChain is a retrieval-focused system that interacts with the data stored in a VectorStore. Utilizing advanced techniques, like context-aware filtering and ranking, it retrieves the most relevant code snippets and information for a given user query. Conversational RetrieverChain is engineered to deliver high-quality, pertinent results while considering conversation history and context. LangChain Workflow for Code Understanding and Generation Index the code base: Clone the target repository, load all files within, chunk the files, and execute the indexing process. Optionally, you can skip this step and use an already indexed dataset. Embedding and Code Store: Code snippets are embedded using a code-aware embedding model and stored in a VectorStore. Query Understanding: GPT-4 processes user queries, grasping the context and extracting relevant details. Construct the Retriever: Conversational RetrieverChain searches the VectorStore to identify the most relevant code snippets for a given query. Build the Conversational Chain: Customize the retriever settings and define any user-defined filters as needed. Ask questions: Define a list of questions to ask about the codebase, and then use the ConversationalRetrievalChain to generate context-aware answers. The LLM (GPT-4) generates comprehensive, context-aware answers based on retrieved code snippets and conversation history.
https://python.langchain.com/en/latest/use_cases/code.html
e6ae67a9cee5-1
The full tutorial is available below. Twitter the-algorithm codebase analysis with Deep Lake: A notebook walking through how to parse GitHub source code and run conversational queries over it. LangChain codebase analysis with Deep Lake: A notebook walking through how to analyze and do question answering over THIS code base.
https://python.langchain.com/en/latest/use_cases/code.html
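A minimal hedged sketch of the chain described in the workflow above (it assumes a vector store db already indexed over a repository's files; the model and question are illustrative):

from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain

qa = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(model_name="gpt-4"),
    retriever=db.as_retriever(),
)
# pass the running chat history back in so answers stay context-aware
result = qa({"question": "What does this repository's main entry point do?", "chat_history": []})
print(result["answer"])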
7eef92193f4c-0
Summarization# Conceptual Guide Summarization involves creating a smaller summary of multiple longer documents. This can be useful for distilling long documents into the core pieces of information. The recommended way to get started using a summarization chain is: from langchain.chains.summarize import load_summarize_chain chain = load_summarize_chain(llm, chain_type="map_reduce") chain.run(docs) The following resources exist: Summarization Notebook: A notebook walking through how to accomplish this task. Additional related resources include: Utilities for working with Documents: Guides on how to use several of the utilities which will prove helpful for this task, including Text Splitters (for splitting up long documents).
https://python.langchain.com/en/latest/use_cases/summarization.html
Chatbots#
Conceptual Guide
Since language models are good at producing text, they are well suited to creating chatbots. Aside from the base prompts/LLMs, an important concept to know for chatbots is memory. Most chat-based applications rely on remembering what happened in previous interactions, which memory is designed to help with; a minimal sketch appears after the list below.
The following resources exist:
ChatGPT Clone: A notebook walking through how to recreate a ChatGPT-like experience with LangChain.
Conversation Memory: A notebook walking through how to use different types of conversational memory.
Conversation Agent: A notebook walking through how to create an agent optimized for conversation.
Additional related resources include:
Memory Key Concepts: Explanation of key concepts related to memory.
Memory Examples: A collection of how-to examples for working with memory.
More end-to-end examples include:
Voice Assistant: A notebook walking through how to create a voice assistant using LangChain.
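To make the memory concept concrete, here is a minimal sketch of a chatbot that remembers earlier turns, using ConversationBufferMemory, the simplest memory type (the notebooks above cover the alternatives):
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

# ConversationBufferMemory stores the raw back-and-forth and injects it into each prompt
conversation = ConversationChain(
    llm=OpenAI(temperature=0),
    memory=ConversationBufferMemory(),
    verbose=True,
)

conversation.predict(input="Hi, my name is Sam.")
# Because the first turn is kept in memory, the model can answer this follow-up:
conversation.predict(input="What is my name?")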
https://python.langchain.com/en/latest/use_cases/chatbots.html
Agent Simulations#
Agent simulations involve one or more agents interacting with each other. Agent simulations generally involve two main components:
Long Term Memory
Simulation Environment
Specific implementations of agent simulations (or parts of agent simulations) include:
Simulations with One Agent#
Simulated Environment: Gymnasium: an example of how to create a simple agent-environment interaction loop with Gymnasium (formerly OpenAI Gym).
Simulations with Two Agents#
CAMEL: an implementation of the CAMEL (Communicative Agents for “Mind” Exploration of Large Scale Language Model Society) paper, where two agents communicate with each other.
Two Player D&D: an example of how to use a generic simulator for two agents to implement a variant of the popular Dungeons & Dragons role-playing game.
Simulations with Multiple Agents#
Multi-Player D&D: an example of how to use a generic dialogue simulator for multiple dialogue agents with a custom speaker ordering, illustrated with a variant of the popular Dungeons & Dragons role-playing game.
Decentralized Speaker Selection: an example of how to implement a multi-agent dialogue without a fixed schedule for who speaks when. Instead, the agents decide for themselves who speaks by outputting bids to speak. This example shows how to do this in the context of a fictitious presidential debate.
Authoritarian Speaker Selection: an example of how to implement a multi-agent dialogue where a privileged agent directs who speaks when. This example also showcases how to enable the privileged agent to determine when the conversation terminates. This example shows how to do this in the context of a fictitious news show.
Simulated Environment: PettingZoo: an example of how to create an agent-environment interaction loop for multiple agents with PettingZoo (a multi-agent version of Gymnasium).
Generative Agents: This notebook implements a generative agent based on the paper Generative Agents: Interactive Simulacra of Human Behavior by Park et al.
https://python.langchain.com/en/latest/use_cases/agent_simulations.html
Personal Assistants (Agents)#
Conceptual Guide
We use “personal assistant” here in a very broad sense. Personal assistants have a few characteristics:
They can interact with the outside world
They have knowledge of your data
They remember your interactions
Really, all of the functionality in LangChain is relevant for building a personal assistant. Highlighting specific parts:
Agent Documentation (for interacting with the outside world)
Index Documentation (for giving them knowledge of your data)
Memory (for helping them remember interactions)
A minimal sketch combining these pieces appears after the examples below.
Specific examples of this include:
AI Plugins: an implementation of an agent that is designed to be able to use all AI Plugins.
Plug-and-PlAI (Plugins Database): an implementation of an agent that is designed to be able to use all AI Plugins retrieved from PlugNPlAI.
Wikibase Agent: an implementation of an agent that is designed to interact with Wikibase.
Sales GPT: This notebook demonstrates an implementation of a Context-Aware AI Sales agent.
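To make these characteristics concrete, here is a minimal sketch combining two of them: a tool for interacting with the outside world and conversational memory. The tool choice and agent type are illustrative assumptions, and an index-backed retrieval tool would add knowledge of your data:
from langchain.agents import initialize_agent, Tool
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory
from langchain.utilities import SerpAPIWrapper

# Interact with the outside world: give the agent a web-search tool
search = SerpAPIWrapper()
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events",
    )
]

# Remember your interactions: conversational memory shared across turns
memory = ConversationBufferMemory(memory_key="chat_history")

agent = initialize_agent(
    tools,
    OpenAI(temperature=0),
    agent="conversational-react-description",
    memory=memory,
    verbose=True,
)
agent.run(input="What is the weather in SF today?")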
https://python.langchain.com/en/latest/use_cases/personal_assistants.html
Autonomous Agents#
Autonomous Agents are agents designed to be longer-running. You give them one or more long-term goals, and they independently execute towards those goals. These applications combine tool usage and long-term memory.
At the moment, Autonomous Agents are fairly experimental and based on other open-source projects. By implementing these open-source projects in LangChain primitives, we get the benefits of LangChain: easy switching and experimenting with multiple LLMs, usage of different vectorstores as memory, and usage of LangChain's collection of tools. A minimal sketch of running one of these agents appears after the list below.
Baby AGI (Original Repo)#
Baby AGI: a notebook implementing BabyAGI as LLM Chains
Baby AGI with Tools: building off the above notebook, this example substitutes in an agent with tools as the execution tools, allowing it to actually take actions.
AutoGPT (Original Repo)#
AutoGPT: a notebook implementing AutoGPT in LangChain primitives
WebSearch Research Assistant: a notebook showing how to use AutoGPT plus specific tools to act as a research assistant that can use the web.
MetaPrompt (Original Repo)#
Meta-Prompt: a notebook implementing Meta-Prompt in LangChain primitives
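As a rough illustration, here is a minimal sketch of running one of these agents, following the structure of the Baby AGI notebook; the objective, iteration cap, and FAISS-backed memory are illustrative choices:
import faiss

from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.experimental import BabyAGI
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS

# Long-term memory: an empty FAISS vectorstore for storing task results
embeddings = OpenAIEmbeddings()
index = faiss.IndexFlatL2(1536)  # 1536 is the OpenAI embedding dimension
vectorstore = FAISS(embeddings.embed_query, index, InMemoryDocstore({}), {})

# Give the agent a single long-term goal and let it run;
# max_iterations keeps the task loop from running forever
baby_agi = BabyAGI.from_llm(
    llm=OpenAI(temperature=0),
    vectorstore=vectorstore,
    verbose=False,
    max_iterations=3,
)
baby_agi({"objective": "Write a weather report for SF today"})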
https://python.langchain.com/en/latest/use_cases/autonomous_agents.html
Question Answering over Docs#
Conceptual Guide
Question answering in this context refers to question answering over your document data. For question answering over other types of data, please see the documentation for those sources, like SQL database Question Answering or Interacting with APIs.
For question answering over many documents, you almost always want to create an index over the data. This can be used to smartly access the most relevant documents for a given question, allowing you to avoid having to pass all the documents to the LLM (saving you time and money).
See this notebook for a more detailed introduction, but for a super quick start the steps involved are:
Load Your Documents
from langchain.document_loaders import TextLoader
loader = TextLoader('../state_of_the_union.txt')
See here for more information on how to get started with document loading.
Create Your Index
from langchain.indexes import VectorstoreIndexCreator
index = VectorstoreIndexCreator().from_loaders([loader])
The best and most popular index by far at the moment is the VectorStore index.
Query Your Index
query = "What did the president say about Ketanji Brown Jackson"
index.query(query)
Alternatively, use query_with_sources to also get back the sources involved:
query = "What did the president say about Ketanji Brown Jackson"
index.query_with_sources(query)
Again, these high-level interfaces obfuscate a lot of what is going on under the hood, so please see this notebook for a lower-level walkthrough.
Document Question Answering#
Question answering involves fetching multiple documents and then asking a question of them. The LLM response will contain the answer to your question, based on the content of the documents.
The recommended way to get started with a question answering chain is:
from langchain.chains.question_answering import load_qa_chain
chain = load_qa_chain(llm, chain_type="stuff")
chain.run(input_documents=docs, question=query)
The following resources exist:
Question Answering Notebook: A notebook walking through how to accomplish this task.
VectorDB Question Answering Notebook: A notebook walking through how to do question answering over a vector database. This can often be useful when you have a LOT of documents and you don't want to pass them all to the LLM, but rather first want to do some semantic search over embeddings.
Adding in sources#
There is also a variant of this where, in addition to responding with the answer, the language model will also cite its sources (e.g. which of the documents passed in it used).
The recommended way to get started with a question answering with sources chain is:
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
chain = load_qa_with_sources_chain(llm, chain_type="stuff")
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
The following resources exist:
QA With Sources Notebook: A notebook walking through how to accomplish this task.
VectorDB QA With Sources Notebook: A notebook walking through how to do question answering with sources over a vector database. This can often be useful when you have a LOT of documents and you don't want to pass them all to the LLM, but rather first want to do some semantic search over embeddings.
A self-contained sketch that fills in the llm, docs, and query these snippets assume appears below.
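The following is a minimal sketch; the file path, chunk size, and choice of FAISS as the vectorstore are illustrative assumptions rather than requirements:
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.llms import OpenAI
from langchain.chains.question_answering import load_qa_chain

# Build a small vectorstore so there are documents to fetch
documents = TextLoader("../state_of_the_union.txt").load()
texts = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0).split_documents(documents)
db = FAISS.from_documents(texts, OpenAIEmbeddings())

# Fetch only the documents relevant to the question, then ask the chain
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)

chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
answer = chain.run(input_documents=docs, question=query)
print(answer)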
Additional Related Resources#
Additional related resources include:
Utilities for working with Documents: Guides on how to use several of the utilities that will prove helpful for this task, including Text Splitters (for splitting up long documents) and Embeddings & Vectorstores (useful for the above Vector DB example).
CombineDocuments Chains: A conceptual overview of the specific types of chains with which you can accomplish this task.
End-to-end examples#
For examples of this done in an end-to-end manner, please see the following resources:
Semantic search over a group chat with Sources Notebook: A notebook that semantically searches over a group chat conversation.
https://python.langchain.com/en/latest/use_cases/question_answering.html
Evaluation#
Conceptual Guide
This section of documentation covers how we approach and think about evaluation in LangChain: both evaluation of LangChain's internal chains/agents, and how we recommend people building on top of LangChain approach evaluation.
The Problem#
It can be really hard to evaluate LangChain chains and agents. There are two main reasons for this:
#1: Lack of data
You generally don't have a ton of data to evaluate your chains/agents over before starting a project. This is usually because Large Language Models (the core of most chains/agents) are terrific few-shot and zero-shot learners, meaning you are almost always able to get started on a particular task (text-to-SQL, question answering, etc.) without a large dataset of examples. This is in stark contrast to traditional machine learning, where you had to first collect a bunch of datapoints before even getting started using a model.
#2: Lack of metrics
Most chains/agents perform tasks for which there are not very good metrics to evaluate performance. For example, one of the most common use cases is generating text of some form. Evaluating generated text is much more complicated than evaluating a classification or numeric prediction.
The Solution#
LangChain attempts to tackle both of those issues. What we have so far are initial passes at solutions; we do not think we have a perfect solution. So we very much welcome feedback, contributions, integrations, and thoughts on this.
Here is what we have for each problem so far:
#1: Lack of data
We have started LangChainDatasets, a Community space on Hugging Face. We intend this to be a collection of open-source datasets for evaluating common chains and agents.
We have contributed five datasets of our own to start, but we highly intend this to be a community effort. In order to contribute a dataset, you simply need to join the community and then you will be able to upload datasets.
We're also aiming to make it as easy as possible for people to create their own datasets. As a first pass at this, we've added a QAGenerationChain, which, given a document, comes up with question-answer pairs that can be used to evaluate question-answering tasks over that document down the line. See this notebook for an example of how to use this chain.
#2: Lack of metrics
We have two solutions to the lack of metrics.
The first solution is to use no metrics, and rather just rely on looking at results by eye to get a sense for how the chain/agent is performing. To assist in this, we have developed (and will continue to develop) tracing, a UI-based visualizer of your chain and agent runs.
The second solution we recommend is to use language models themselves to evaluate outputs. For this we have a few different chains and prompts aimed at tackling this issue; a short sketch combining both solutions appears at the end of this page.
The Examples#
We have created a bunch of examples combining the above two solutions to show how we internally evaluate chains and agents when we are developing. In addition to the examples we've curated, we also highly welcome contributions here. To facilitate that, we've included a template notebook for community members to use to build their own examples.
The existing examples we have are:
Question Answering (State of Union): A notebook showing evaluation of a question-answering task over a State-of-the-Union address.
Question Answering (Paul Graham Essay): A notebook showing evaluation of a question-answering task over a Paul Graham essay.
SQL Question Answering (Chinook): A notebook showing evaluation of a question-answering task over a SQL database (the Chinook database).
Agent Vectorstore: A notebook showing evaluation of an agent doing question answering while routing between two different vector databases.
Agent Search + Calculator: A notebook showing evaluation of an agent doing question answering using a search engine and a calculator as tools.
Evaluating an OpenAPI Chain: A notebook showing evaluation of an OpenAPI chain, including how to generate test data if you don't have any.
Other Examples#
In addition, we also have some more generic resources for evaluation:
Question Answering: An overview of LLMs aimed at evaluating question answering systems in general.
Data Augmented Question Answering: An end-to-end example of evaluating a question answering system focused on a specific document (a RetrievalQAChain, to be precise). This example highlights how to use LLMs to come up with question/answer examples to evaluate over, and then how to use LLMs to evaluate performance on those generated examples.
Hugging Face Datasets: Covers an example of loading and using a dataset from Hugging Face for evaluation.
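To make the two solutions described above concrete, here is a minimal sketch that generates an evaluation set with the QAGenerationChain and then grades predictions with an LLM via QAEvalChain; the file path and the placeholder predictions list are illustrative assumptions:
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.chains import QAGenerationChain
from langchain.document_loaders import TextLoader
from langchain.evaluation.qa import QAEvalChain

# Lack of data: generate question/answer pairs from your own document
doc = TextLoader("../state_of_the_union.txt").load()[0]
gen_chain = QAGenerationChain.from_llm(ChatOpenAI(temperature=0))
examples = gen_chain.run(doc.page_content[:3000])  # [{"question": ..., "answer": ...}, ...]

# Lack of metrics: have an LLM grade predictions against the reference answers.
# `predictions` is a placeholder here; in practice it would be your chain's outputs,
# e.g. predictions = my_qa_chain.apply(examples)
predictions = [{"result": example["answer"]} for example in examples]
eval_chain = QAEvalChain.from_llm(OpenAI(temperature=0))
graded = eval_chain.evaluate(
    examples, predictions, question_key="question", prediction_key="result"
)
print(graded)  # e.g. [{"text": "CORRECT"}, ...]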
https://python.langchain.com/en/latest/use_cases/evaluation.html
AutoGPT#
Implementation of https://github.com/Significant-Gravitas/Auto-GPT but with LangChain primitives (LLMs, PromptTemplates, VectorStores, Embeddings, Tools).
Set up tools#
We'll set up an AutoGPT with a search tool, a write-file tool, and a read-file tool.
from langchain.utilities import SerpAPIWrapper
from langchain.agents import Tool
from langchain.tools.file_management.write import WriteFileTool
from langchain.tools.file_management.read import ReadFileTool

search = SerpAPIWrapper()
tools = [
    Tool(
        name="search",
        func=search.run,
        description="useful for when you need to answer questions about current events. You should ask targeted questions",
    ),
    WriteFileTool(),
    ReadFileTool(),
]
Set up memory#
The memory here is used for the agent's intermediate steps.
import faiss

from langchain.vectorstores import FAISS
from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings

# Define your embedding model
embeddings_model = OpenAIEmbeddings()
# Initialize the vectorstore as empty (1536 is the OpenAI embedding dimension)
embedding_size = 1536
index = faiss.IndexFlatL2(embedding_size)
vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})
Setup model and AutoGPT#
Initialize everything! We will use the ChatOpenAI model.
from langchain.experimental import AutoGPT
from langchain.chat_models import ChatOpenAI

agent = AutoGPT.from_llm_and_tools(
    ai_name="Tom",
    ai_role="Assistant",
    tools=tools,
    llm=ChatOpenAI(temperature=0),
    memory=vectorstore.as_retriever(),
)
# Set verbose to be true
agent.chain.verbose = True
ai_name="Tom", ai_role="Assistant", tools=tools, llm=ChatOpenAI(temperature=0), memory=vectorstore.as_retriever() ) # Set verbose to be true agent.chain.verbose = True Run an example# Here we will make it write a weather report for SF agent.run(["write a weather report for SF today"]) > Entering new LLMChain chain... Prompt after formatting: System: You are Tom, Assistant Your decisions must always be made independently without seeking user assistance. Play to your strengths as an LLM and pursue simple strategies with no legal complications. If you have completed all your tasks, make sure to use the "finish" command. GOALS: 1. write a weather report for SF today Constraints: 1. ~4000 word limit for short term memory. Your short term memory is short, so immediately save important information to files. 2. If you are unsure how you previously did something or want to recall past events, thinking about similar events will help you remember. 3. No user assistance 4. Exclusively use the commands listed in double quotes e.g. "command name" Commands: 1. search: useful for when you need to answer questions about current events. You should ask targeted questions, args json schema: {"query": {"title": "Query", "type": "string"}} 2. write_file: Write file to disk, args json schema: {"file_path": {"title": "File Path", "description": "name of file", "type": "string"}, "text": {"title": "Text", "description": "text to write to file", "type": "string"}}
3. read_file: Read file from disk, args json schema: {"file_path": {"title": "File Path", "description": "name of file", "type": "string"}}
4. finish: use this to signal that you have finished all your objectives, args: "response": "final response to let people know you have finished your objectives"
Resources:
1. Internet access for searches and information gathering.
2. Long Term memory management.
3. GPT-3.5 powered Agents for delegation of simple tasks.
4. File output.
Performance Evaluation:
1. Continuously review and analyze your actions to ensure you are performing to the best of your abilities.
2. Constructively self-criticize your big-picture behavior constantly.
3. Reflect on past decisions and strategies to refine your approach.
4. Every command has a cost, so be smart and efficient. Aim to complete tasks in the least number of steps.
You should only respond in JSON format as described below
Response Format:
{
    "thoughts": {
        "text": "thought",
        "reasoning": "reasoning",
        "plan": "- short bulleted\n- list that conveys\n- long-term plan",
        "criticism": "constructive self-criticism",
        "speak": "thoughts summary to say to user"
    },
    "command": {
        "name": "command name",
        "args": {
            "arg name": "value"
        }
    }
}
Ensure the response can be parsed by Python json.loads
System: The current time and date is Tue Apr 18 21:31:28 2023
System: This reminds you of these events from your past:
[]
Human: Determine which next command to use, and respond using the format specified above:

> Finished chain.
{
    "thoughts": {
        "text": "I will start by writing a weather report for San Francisco today. I will use the 'search' command to find the current weather conditions.",
        "reasoning": "I need to gather information about the current weather conditions in San Francisco to write an accurate weather report.",
        "plan": "- Use the 'search' command to find the current weather conditions in San Francisco\n- Write a weather report based on the information gathered",
        "criticism": "I need to make sure that the information I gather is accurate and up-to-date.",
        "speak": "I will use the 'search' command to find the current weather conditions in San Francisco."
    },
    "command": {
        "name": "search",
        "args": {
            "query": "what is the current weather in san francisco"
        }
    }
}

> Entering new LLMChain chain...
Prompt after formatting:
System: You are Tom, Assistant
Your decisions must always be made independently without seeking user assistance. Play to your strengths as an LLM and pursue simple strategies with no legal complications. If you have completed all your tasks, make sure to use the "finish" command.
GOALS:
1. write a weather report for SF today
Constraints:
1. ~4000 word limit for short term memory. Your short term memory is short, so immediately save important information to files.
2. If you are unsure how you previously did something or want to recall past events, thinking about similar events will help you remember.
3. No user assistance
4. Exclusively use the commands listed in double quotes e.g. "command name"
Commands:
1. search: useful for when you need to answer questions about current events. You should ask targeted questions, args json schema: {"query": {"title": "Query", "type": "string"}}
2. write_file: Write file to disk, args json schema: {"file_path": {"title": "File Path", "description": "name of file", "type": "string"}, "text": {"title": "Text", "description": "text to write to file", "type": "string"}}
3. read_file: Read file from disk, args json schema: {"file_path": {"title": "File Path", "description": "name of file", "type": "string"}}
4. finish: use this to signal that you have finished all your objectives, args: "response": "final response to let people know you have finished your objectives"
Resources:
1. Internet access for searches and information gathering.
2. Long Term memory management.
3. GPT-3.5 powered Agents for delegation of simple tasks.
4. File output.
Performance Evaluation:
1. Continuously review and analyze your actions to ensure you are performing to the best of your abilities.
2. Constructively self-criticize your big-picture behavior constantly.
3. Reflect on past decisions and strategies to refine your approach.
4. Every command has a cost, so be smart and efficient. Aim to complete tasks in the least number of steps.
You should only respond in JSON format as described below
Response Format:
{
    "thoughts": {
        "text": "thought",
        "reasoning": "reasoning",
https://python.langchain.com/en/latest/use_cases/autonomous_agents/autogpt.html