Unnamed: 0,title,content,source,url 0,Multimodal Financial Document Analysis and Recall;,"# Multimodal Financial Document Analysis and Recall; Tesla Investor Presentations In this lesson, we will explore the application of the Retrieval-Augmented Generation (RAG) method in processing a company's financial information contained within a PDF document. The process includes extracting critical data from a PDF file (like text, tables, graphs, etc.) and saving it in a vector store database such as Deep Lake for quick and efficient retrieval. Next, a RAG-enabled bot can access the stored information to respond to end-user queries. This task requires diverse tools, including [Unstructured.io](https://unstructured.io/) for text/table extraction, OpenAI's GPT-4V for extracting information from graphs, and LlamaIndex for developing a bot with retrieval capabilities. As previously mentioned, data preprocessing plays a significant role in the RAG process. So, we start by pulling data from a PDF document. The content of this lesson focuses on demonstrating how to extract data from a single PDF document for ease of understanding. Nevertheless, the accompanying notebook provided after the lesson will analyze three separate reports, offering a broader scope of information.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320368-building-a-multi-modal-financial-document-analysis-and-recall-for-tesla-investor-presentations 1,Multimodal Financial Document Analysis and Recall;,"# Extracting Data Extracting textual data is relatively straightforward, but processing graphical elements such as line or bar charts can be more challenging. The latest OpenAI model equipped with vision processing, GPT-4V, is valuable for visual elements. We can feed the slides to the model and ask it to describe them in detail; these descriptions will then be used to complement the textual information. This lesson uses Tesla's [Q3 financial report](https://digitalassets.tesla.com/tesla-contents/image/upload/IR/TSLA-Q3-2023-Update-3.pdf) as the source document. It is possible to download the document using the `wget` command. ```bash wget https://digitalassets.tesla.com/tesla-contents/image/upload/IR/TSLA-Q3-2023-Update-3.pdf ``` ",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320368-building-a-multi-modal-financial-document-analysis-and-recall-for-tesla-investor-presentations 2,Multimodal Financial Document Analysis and Recall;,"# Extracting Data ## 1. Text/Tables The `unstructured` package is an effective tool for extracting information from PDF files. It requires two tools, `poppler` and `tesseract`, which help render PDF documents. We suggest setting up these packages on [Google Colab](https://colab.research.google.com/), which is freely available for students to execute and experiment with the code. We will briefly mention the installation of these packages on other operating systems. Let's install the utilities and their dependencies using the following commands. ```bash apt-get -qq install poppler-utils apt-get -qq install tesseract-ocr pip install -q unstructured[all-docs]==0.11.0 fastapi==0.103.2 kaleido==0.2.1 uvicorn==0.24.0.post1 typing-extensions==4.5.0 pydantic==1.10.13 ``` The process is simple after installing all the necessary packages and dependencies. We simply use the `partition_pdf` function, which extracts text and table data from the PDF and divides it into multiple chunks. We can customize the size of these chunks based on the number of characters.
```python from unstructured.partition.pdf import partition_pdf raw_pdf_elements = partition_pdf( filename=""./TSLA-Q3-2023-Update-3.pdf"", # Use layout model (YOLOX) to get bounding boxes (for tables) and find titles # Titles are any sub-section of the document infer_table_structure=True, # Post processing to aggregate text once we have the title chunking_strategy=""by_title"", # Chunking params to aggregate text blocks # Attempt to create a new chunk 3800 chars # Attempt to keep chunks > 2000 chars # Hard max on chunks max_characters=4000, new_after_n_chars=3800, combine_text_under_n_chars=2000 ) ``` The previous code identifies and extracts various elements from the PDF, which can be classified into CompositeElements (the textual content) and Tables. We use the `[Pydantic](https://docs.pydantic.dev/latest/)` package to create a new data structure that stores information about each element, including their `type` and `text`. The code below iterates through all extracted elements, keeping them in a list where each item is an instance of the `Element` type. ```python from pydantic import BaseModel from typing import Any # Define data structure class Element(BaseModel): type: str text: Any # Categorize by type categorized_elements = [] for element in raw_pdf_elements: if ""unstructured.documents.elements.Table"" in str(type(element)): categorized_elements.append(Element(type=""table"", text=str(element))) elif ""unstructured.documents.elements.CompositeElement"" in str(type(element)): categorized_elements.append(Element(type=""text"", text=str(element))) ``` Creating the `Element` data structure enables convenient storage of the additional information, which can be beneficial for identifying the source of each answer, whether it is derived from texts, tables, or figures.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320368-building-a-multi-modal-financial-document-analysis-and-recall-for-tesla-investor-presentations 3,Multimodal Financial Document Analysis and Recall;,"# Extracting Data ## 2. Graphs The next step is gathering information from the charts to add context. The primary challenge is extracting images from the pages to feed into OpenAI's endpoint. A practical approach is to convert the PDF to images and pass each page to the model, inquiring if it detects any graphs. If it identifies one or more charts, the model can describe the data and the trends they represent. If no graphs are detected, the model will return an empty array as an indication. The initial step involves installing the `pdf2image` package to convert the PDF into images. This also requires the `poppler` tool, which we have already installed. ```bash !pip install -q pdf2image==1.16.3 ``` The code below uses the `convert_from_path` function, which takes the path of a PDF file. We can iterate over each page and save it as a PNG file using the `.save()` method. These images will be saved in the `./pages` directory. Additionally, we define the `pages_png` variable that holds the path of each image. ```python import os from pdf2image import convert_from_path os.mkdir(""./pages"") convertor = convert_from_path('./TSLA-Q3-2023-Update-3.pdf') for idx, image in enumerate( convertor ): image.save(f""./pages/page-{idx}.png"") pages_png = [file for file in os.listdir(""./pages"") if file.endswith('.png')] ``` Defining a few helper functions and variables is necessary before sending the image files to the OpenAI API. 
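Before defining them, it helps to pull in the handful of standard-library and third-party modules that the following snippets rely on (`base64`, `copy`, `json`, `os`, `requests`, and `tqdm`); these imports are implied by the original code but never shown explicitly:

```python
import base64    # encode PNG pages for the OpenAI vision endpoint
import copy      # deep-copy the request payload for each page
import json      # parse the JSON answer returned by the model
import os        # read the OPENAI_API_KEY environment variable
import requests  # send the HTTP request to the OpenAI API
from tqdm import tqdm  # progress bar while looping over pages
```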
The `headers` variable will contain the OpenAI API Key, enabling the server to authenticate our requests. The `payload` carries configurations such as the model name, the maximum token limit, and the prompts. It instructs the model to describe the graphs and generate responses in JSON format, addressing scenarios like encountering multiple graphs on a single page or finding no graphs at all. We will add the images to the `payload` before sending the requests. Finally, there is the `encode_image()` function, which encodes the images in base64 format, allowing them to be processed by OpenAI. ```python headers = { ""Content-Type"": ""application/json"", ""Authorization"": ""Bearer "" + str( os.environ[""OPENAI_API_KEY""] ) } payload = { ""model"": ""gpt-4-vision-preview"", ""messages"": [ { ""role"": ""user"", ""content"": [ { ""type"": ""text"", ""text"": ""You are an assistant that finds charts, graphs, or diagrams in an image and summarizes their information. There could be multiple diagrams in one image, so explain each one of them separately. Ignore tables."" }, { ""type"": ""text"", ""text"": 'The response must be a JSON in the following format {""graphs"": [<desc_1>, <desc_2>, <desc_3>]} where <desc_1>, <desc_2>, and <desc_3> are placeholders that describe each graph found in the image. Do not append or add anything other than the JSON format response.' }, { ""type"": ""text"", ""text"": 'If you could not find a graph in the image, return an empty list JSON as follows: {""graphs"": []}. Do not append or add anything other than the JSON format response. Do not use ""```"" code marks or the word json.' }, { ""type"": ""text"", ""text"": ""Look at the attached image and describe all the graphs inside it in JSON format. Ignore tables and be concise."" } ] } ], ""max_tokens"": 1000 } # Function to encode the image to base64 format def encode_image(image_path): with open(image_path, ""rb"") as image_file: return base64.b64encode(image_file.read()).decode('utf-8') ``` The remaining steps include: 1) utilizing the `pages_png` variable to loop through the images, 2) encoding the image into base64 format, 3) adding the image into the payload, and finally, 4) sending",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320368-building-a-multi-modal-financial-document-analysis-and-recall-for-tesla-investor-presentations 4,Multimodal Financial Document Analysis and Recall;,"# Extracting Data ## 2. Graphs the request to OpenAI and handling its responses. We will use the same `Element` data structure to store each image's type (graph) and the text (descriptions of the graphs). ```python graphs_description = [] for idx, page in tqdm( enumerate( pages_png ) ): # Getting the base64 string base64_image = encode_image(f""./pages/{page}"") # Adjust Payload tmp_payload = copy.deepcopy(payload) tmp_payload['messages'][0]['content'].append({ ""type"": ""image_url"", ""image_url"": { ""url"": f""data:image/png;base64,{base64_image}"" } }) try: response = requests.post(""https://api.openai.com/v1/chat/completions"", headers=headers, json=tmp_payload) response = response.json() graph_data = json.loads( response['choices'][0]['message']['content'] )['graphs'] desc = [f""{page}\n"" + '\n'.join(f""{key}: {item[key]}"" for key in item.keys()) for item in graph_data] graphs_description.extend( desc ) except: # Skip the page if there is an error. print(""skipping...
error in decoding."") continue; graphs_description = [Element(type=""graph"", text=str(item)) for item in graphs_description] ```",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320368-building-a-multi-modal-financial-document-analysis-and-recall-for-tesla-investor-presentations 5,Multimodal Financial Document Analysis and Recall;,"# Store on Deep Lake This section will utilize the Deep Lake vector database to store the collected information and their embeddings. These embedding vectors convert pieces of text into numerical representations that capture their meaning, enabling similarity metrics such as cosine similarity to identify documents with close relationships. For instance, a prompt inquiring about a company's total revenue would result in high cosine similarity with a database document stating the revenue amount as X dollars. The data preparation is complete with the extraction of all crucial information from the PDF. The next step involves combining the output from the previous sections, resulting in a list containing 41 entries. ```python all_docs = categorized_elements + graphs_description print( len( all_docs ) ) ``` ``` 41 ``` Given that we are using LlamaIndex, we can use its integration with Deep Lake to create and store the dataset. Begin by installing LlamaIndex and deeplake packages along with their dependencies. ```bash !pip install -q llama_index==0.9.8 deeplake==3.8.8 cohere==4.37 ``` Before using the libraries, it's essential to configure the `OPENAI_API_KEY` and `ACTIVELOOP_TOKEN` variables in the environment. Remember to substitute the placeholder values with your actual keys from the respective platforms. ```python import os os.environ[""OPENAI_API_KEY""] = """" os.environ[""ACTIVELOOP_TOKEN""] = """" ``` The integration of LlamaIndex enables the use of `DeepLakeVectorStore` class, which is designed to create a new dataset. Simply enter your organization ID, which by default is your Activeloop username, in the code provided below. This code will generate an empty dataset, ready to store documents. ```python from llama_index.vector_stores import DeepLakeVectorStore # TODO: use your organization id here. (by default, org id is your username) my_activeloop_org_id = """" my_activeloop_dataset_name = ""tsla_q3"" dataset_path = f""hub://{my_activeloop_org_id}/{my_activeloop_dataset_name}"" vector_store = DeepLakeVectorStore( dataset_path=dataset_path, runtime={""tensor_db"": True}, overwrite=False) ``` ``` Your Deep Lake dataset has been successfully created! ``` Next, we must pass the created vector store to a `StorageContext` class. This class serves as a wrapper to create storage from various data types. In our case, we're generating the storage from a vector database, which is accomplished simply by passing the created database instance using the `.from_defaults()` method. ```python from llama_index.storage.storage_context import StorageContext storage_context = StorageContext.from_defaults(vector_store=vector_store) ``` To store our preprocessed data, we must transform them into LlamaIndex `Documents` for compatibility with the library. The LlamaIndex `Document` is an abstract class that acts as a wrapper for various data types, including text files, PDFs, and database outputs. This wrapper facilitates the storage of valuable information with each sample. In our case, we can include a metadata tag to hold extra details like the data type (text, table, or graph) or denote document relationships. 
This approach simplifies the retrieval of these details later. As shown in the code below, you can employ built-in classes like `SimpleDirectoryReader` to automatically read files from a specified path or proceed manually. It will loop through our list containing all the extracted information and assign text and a category to each document. ```python from llama_index import Document documents = [Document(text=t.text, metadata={""category"": t.type},) for t in categorized_elements] ``` Lastly, we can utilize the `VectorStoreIndex` class to generate embeddings for the documents and employ the database instance to store these values. By default, it uses OpenAI's Ada model to create the embeddings. ```python from llama_index import VectorStoreIndex index = VectorStoreIndex.from_documents( documents, storage_context=storage_context ) ``` ``` Uploading data to deeplake dataset. 100%|██████████| 29/29 [00:00<00:00, 46.26it/s] \Dataset(path='hub://alafalaki/tsla_q3-nograph', tensors=['text', 'metadata', 'embedding', 'id']) tensor htype shape dtype compression ------- ------- ------- ------- ------- text text (29, 1) str None metadata json (29, 1) str None embedding embedding (29, 1536) float32 None id text (29, 1) str None ``` ",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320368-building-a-multi-modal-financial-document-analysis-and-recall-for-tesla-investor-presentations 7,Multimodal Financial Document Analysis and Recall;,"# Activate DeepMemory The deep memory feature from Activeloop enhances the retriever's accuracy. This improvement allows the model to access higher-quality data, leading to more detailed and informative responses. In earlier lessons, we already covered the basics of deep memory, so we will not dive into more details. The process begins by fetching chunks of data from the cloud and using GPT-3.5 to create specific questions for each chunk. These generated questions are then utilized in the deep memory training procedure to enhance the embedding quality. In our experience, this approach led to a 25% enhancement in performance. The initial phase involves loading the pre-existing dataset and reading the text of each chunk along with its corresponding ID. ```python from llama_index.vector_stores import DeepLakeVectorStore # TODO: use your organization id here. (by default, org id is your username) my_activeloop_org_id = """" my_activeloop_dataset_name = ""LlamaIndex_tsla_q3"" dataset_path = f""hub://{my_activeloop_org_id}/{my_activeloop_dataset_name}"" db = DeepLakeVectorStore( dataset_path=dataset_path, runtime={""tensor_db"": True}, read_only=True ) # fetch dataset docs and ids if they exist (optional you can also ingest) docs = db.vectorstore.dataset.text.data(fetch_chunks=True, aslist=True)['value'] ids = db.vectorstore.dataset.id.data(fetch_chunks=True, aslist=True)['value'] print(len(docs)) ``` ``` Deep Lake Dataset in hub://genai360/tesla_quarterly_2023 already exists, loading from the storage 127 ``` The following code segment outlines a function designed to use GPT-3.5 for generating questions corresponding to each data chunk. This involves crafting a specialized tool tailored for the OpenAI API. Primarily, the code configures suitable prompts for API requests to produce the questions and compiles them with their associated chunk IDs into a list. 
```python import json import random from tqdm import tqdm from openai import OpenAI client = OpenAI() # Set the function JSON Schema for openai function calling feature tools = [ { ""type"": ""function"", ""function"": { ""name"": ""create_question_from_text"", ""parameters"": { ""type"": ""object"", ""properties"": { ""question"": { ""type"": ""string"", ""description"": ""Question created from the given text"", }, }, ""required"": [""question""], }, ""description"": ""Create question from a given text."", }, } ] def generate_question(tools, text): try: response = client.chat.completions.create( model=""gpt-3.5-turbo"", tools=tools, tool_choice={ ""type"": ""function"", ""function"": {""name"": ""create_question_from_text""}, }, messages=[ {""role"": ""system"", ""content"": ""You are a world class expert for generating questions based on provided context. You make sure the question can be answered by the text.""}, { ""role"": ""user"", ""content"": text, }, ], ) json_response = response.choices[0].message.tool_calls[0].function.arguments parsed_response = json.loads(json_response) question_string = parsed_response[""question""] return question_string except: question_string = ""No question generated"" return question_string def generate_queries(docs: list[str], ids: list[str], n: int): questions = [] relevances = [] pbar = tqdm(total=n) while len(questions) < n: # 1. randomly draw a piece of text and relevance id r = random.randint(0, len(docs)-1) text, label = docs[r], ids[r] # 2. generate queries and assign and relevance id generated_qs = [generate_question(tools, text)] if generated_qs == [""No question generated""]: continue questions.extend(generated_qs) relevances.extend([[(label, 1)] for _ in generated_qs]) pbar.update(len(generated_qs)) return questions[:n], relevances[:n] questions, relevances = generate_queries(docs, ids, n=20) ``` ``` 100%|██████████| 20/20 [00:19<00:00, 1.02it/s] ``` Now, we can use the questions and the reference ids to activate the deep memory using the `.deep_memory.train()` method to improve the embedding representations. You can see the status of the training process using the `.info` method. ```python from langchain.embeddings.openai import OpenAIEmbeddings embeddings = OpenAIEmbeddings() job_id = db.vectorstore.deep_memory.train( queries=questions, relevance=relevances, embedding_function=embeddings.embed_documents, ) print( db.vectorstore.dataset.embedding.info )",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320368-building-a-multi-modal-financial-document-analysis-and-recall-for-tesla-investor-presentations 8,Multimodal Financial Document Analysis and Recall;,"# Activate DeepMemory ``` ``` Starting DeepMemory training job Your Deep Lake dataset has been successfully created! Preparing training data for deepmemory: Creating 20 embeddings in 1 batches of size 20:: 100%|██████████| 1/1 [00:03<00:00, 3.23s/it] DeepMemory training job started. Job ID: 6581e3056a1162b64061a9a4 {'deepmemory': {'6581e3056a1162b64061a9a4_0.npy': {'base_recall@10': 0.25, 'deep_memory_version': '0.2', 'delta': 0.25, 'job_id': '6581e3056a1162b64061a9a4_0', 'model_type': 'npy', 'recall@10': 0.5}, 'model.npy': {'base_recall@10': 0.25, 'deep_memory_version': '0.2', 'delta': 0.25, 'job_id': '6581e3056a1162b64061a9a4_0', 'model_type': 'npy', 'recall@10': 0.5}}} ``` The dataset is now prepared and compatible with the deep memory feature. 
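Before querying, it can be worth confirming that the training job finished and how much it improved recall. Below is a minimal sketch, assuming the `status()` and `evaluate()` helpers exposed by `deep_memory` in recent `deeplake` releases, and reusing the `questions`, `relevances`, and `embeddings` objects created above (ideally the evaluation queries would be held out from training):

```python
# Check the state of the training job returned by .train()
db.vectorstore.deep_memory.status(job_id)

# Optionally measure recall with and without deep memory on a set of queries.
recalls = db.vectorstore.deep_memory.evaluate(
    queries=questions,
    relevance=relevances,
    embedding_function=embeddings.embed_documents,
)
print(recalls)
```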
It's crucial to note that the deep memory option must be actively set to true when using the dataset for inference.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320368-building-a-multi-modal-financial-document-analysis-and-recall-for-tesla-investor-presentations 9,Multimodal Financial Document Analysis and Recall;,"# Chatbot In Action In this section, we will use the created dataset as the retrieval object, providing the necessary context for the `GPT-3.5-turbo` model (the default choice for LlamaIndex) to answer the questions. Keep in mind that the inference outcomes presented in the subsequent section are derived from processing three PDF files, which are consistent with the sample codes provided in the notebook. To access the processed dataset containing all the PDF documents, use `hub://genai360/tesla_quarterly_2023` as the dataset path in the code below. The `DeepLakeVectorStore` class also handles loading a dataset from the hub. The key distinction in the code below, compared to the previous sections, lies in the use of the `.from_vector_store()` method. This method creates indexes directly from the database rather than variables. ```python from llama_index.vector_stores import DeepLakeVectorStore from llama_index.storage.storage_context import StorageContext from llama_index import VectorStoreIndex vector_store = DeepLakeVectorStore(dataset_path=dataset_path, overwrite=False) storage_context = StorageContext.from_defaults(vector_store=vector_store) index = VectorStoreIndex.from_vector_store( vector_store, storage_context=storage_context ) ``` We can now use the `.as_query_engine()` method of the index variables to establish a query engine. This will allow us to ask questions from various data sources. Notice the `vector_store_kwargs` argument, which activates the `deep_memory` feature by setting it to True. This step is essential for enabling the feature on the retriever. The `.query()` method takes a prompt and searches for the most relevant data points within the database to construct an answer. ```python query_engine = index.as_query_engine(vector_store_kwargs={""deep_memory"": True}) response = query_engine.query( ""What are the trends in vehicle deliveries?"", ) ``` ``` The trends in vehicle deliveries on the Quarter 3 report show an increasing trend over the quarters. ``` ![Screenshot referenced graph.](Multimodal%20Financial%20Document%20Analysis%20and%20Recall;%20974bbe2cce5d4402a7ac0bec9022a7f3/Untitled.png) Screenshot referenced graph. As observed, the chatbot effectively utilized the data from the descriptions of the graphs we generated in the report. On the right, there's a screenshot of the bar chart which the chatbot referenced to generate its response. Additionally, we conducted an experiment where we compiled the same dataset but excluded the graph descriptions. This dataset can be accessed via `hub://genai360/tesla_quarterly_2023-nograph` path. The purpose was to determine whether including the descriptions aids the chatbot's performance. ``` In quarter 3, there was a decrease in Model S/X deliveries compared to the previous quarter, with a 14% decline. However, there was an increase in Model 3/Y deliveries, with a 29% growth. Overall, total deliveries in quarter 3 increased by 27% compared to the previous quarter. ``` You'll observe that the chatbot points to incorrect text segments. Despite the answer being contextually similar, it doesn't provide the correct answer. 
The graph shows an upward trend, a detail that might not have been mentioned in the report's text.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320368-building-a-multi-modal-financial-document-analysis-and-recall-for-tesla-investor-presentations 10,Multimodal Financial Document Analysis and Recall;,"# Conclusion In this lesson, we explored the steps of developing a chatbot capable of utilizing PDF files as a knowledge base to answer questions. Additionally, we employed the vision capability of GPT-4V to identify and describe graphs from each page. Describing the charts and their illustrated trends improves the chatbot's accuracy in answering and providing additional context. --- >> [Notebook](https://colab.research.google.com/drive/1JHevaKUazdjSptTMzjFR9BCQ2RTBq73o?usp=sharing). >> Preprocessed Text/Label: [categorized_elements.pkl](Multimodal%20Financial%20Document%20Analysis%20and%20Recall;%20974bbe2cce5d4402a7ac0bec9022a7f3/categorized_elements.pkl) >> Preprocessed Graphs: [graphs_description.pkl](Multimodal%20Financial%20Document%20Analysis%20and%20Recall;%20974bbe2cce5d4402a7ac0bec9022a7f3/graphs_description.pkl)",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320368-building-a-multi-modal-financial-document-analysis-and-recall-for-tesla-investor-presentations 11,Production-Ready RAG Solutions with LlamaIndex,"# Introduction [LlamaIndex](https://www.llamaindex.ai/) is a framework for developing data-driven LLM applications, offering data ingestion, indexing, and querying tools. It plays a key role in incorporating additional data sources into LLMs, which is essential for RAG systems. In this lesson, we will explore how RAG-based applications can be improved by focusing on building production-ready code with a focus on data considerations. We'll discuss how to improve RAG retrieval performance through clear data definition and state management. Additionally, we will cover how to use LLMs to extract metadata to boost retrieval efficiency. The lesson also covers the concerns about how embedding references and summaries in text chunks can significantly improve retrieval performance and the capability of LLMs to infer metadata filters for structured retrieval. We'll also discuss fine-tuning embedding representations in LLM applications to achieve optimal retrieval performance.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320355-production-ready-rag-solutions-with-llamaindex 12,Production-Ready RAG Solutions with LlamaIndex,"# Challenges of RAG Systems Retrieval-Augmented Generation (RAG) applications present unique challenges crucial for their successful implementation. In this section, we explore the dynamic management of data, ensuring varied and effective data representation and adhering to regulatory standards, highlighting the intricate balance required in RAG systems. ### **Document Updates and Stored Vectors** A significant challenge in RAG systems is keeping up with changes in documents and ensuring these updates are accurately reflected in the stored vectors. When documents are modified, added, or removed, the corresponding vectors need to be updated to maintain the accuracy and relevance of the retrieval system. Not addressing this can lead to outdated or irrelevant data retrieval, negatively impacting the system's effectiveness. 
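As a rough illustration of what such an update path can look like with LlamaIndex's document-management helpers (a sketch using the default in-memory index and made-up document texts; the key assumption is that each `Document` carries a stable `id_` so changed sources can be matched to their stored vectors, and that `OPENAI_API_KEY` is set for the default embedding model):

```python
from llama_index import Document, VectorStoreIndex

# Initial ingestion: stable ids let us find these entries again later.
docs_v1 = [
    Document(text='Quarterly revenue grew compared to the prior quarter.', id_='report-revenue'),
    Document(text='Energy storage deployments increased year over year.', id_='report-energy'),
]
index = VectorStoreIndex.from_documents(docs_v1)

# A source document changed: re-issue it under the same id and refresh.
docs_v2 = [
    Document(text='Quarterly revenue grew, driven mainly by higher deliveries.', id_='report-revenue'),
    Document(text='Energy storage deployments increased year over year.', id_='report-energy'),
]
# refresh_ref_docs re-embeds only the documents whose content changed.
index.refresh_ref_docs(docs_v2)

# A source document was removed entirely: drop its vectors as well.
index.delete_ref_doc('report-energy', delete_from_docstore=True)
```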
Implementing dynamic updating mechanisms for vectors can greatly improve the system's ability to provide relevant and current information, enhancing its overall performance. ### Chunking **and Data Distribution** The granularity level is vital in achieving accurate retrieval results. If the chunk size is too large, important details might be missed; if it's too small, the system might get bogged down in details and miss the bigger picture. This setting requires testing and refinement tailored to the specific characteristics of the data and its application. ### **Diverse Representations in Latent Space** The presence of different representations in the same latent space can be challenging (e.g., for representing a paragraph of text versus representing a table or an image). These diverse representations can cause conflicts or inconsistencies when retrieving information, leading to less accurate results. ### **Compliance** Compliance is another critical issue, especially when implementing RAG systems in regulated industries or environments with strict data handling requirements, particularly for private documents with limited access. Non-compliance can lead to legal issues (think about a finance application), data breaches, or misuse of sensitive information. Ensuring the system adheres to relevant laws, regulations, and ethical guidelines prevents these risks. It increases the system's reliability and trustworthiness, vital for its successful deployment.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320355-production-ready-rag-solutions-with-llamaindex 13,Production-Ready RAG Solutions with LlamaIndex,"# Optimization Understanding the intricacies of challenges in RAG systems and their solutions is crucial for boosting their overall effectiveness. We will explore several optimization strategies that can contribute to performance enhancement. ### **Model Selection and Hybrid Retrieval** Selecting appropriate models for the embedding and generation phases is critical. Choosing efficient and cheap embedding models can minimize costs while maintaining performance levels, but not in the generation process where an LLM is needed. Different options are available for both phases, including proprietary models with API access, such as [OpenAI](https://openai.com/blog/openai-api) or [Cohere](https://docs.cohere.com/docs), as well as open-source alternatives like [LLaMA-2](https://ai.meta.com/llama/) and [Mistral](https://mistral.ai/), which offer the flexibility of self-hosting or using third-party APIs. This choice should be based on the unique needs and resources of the application. It’s worth noting that, in some retrieval systems, balancing latency with quality is essential. Combining different methods, like keyword and embedding retrieval with reranking, ensures that the system is fast enough to meet user expectations while still providing accurate results. LlamaIndex also offers extensive integration options with various platforms, allowing for easy selection and comparison between different providers. This facilitates finding the optimal balance between cost and performance for specific needs. ### **CPU-Based Inference** In production, relying on GPU-based inference can incur substantial costs. Investigating options like better hardware or refining the inference code can lower the costs in large-scale applications where small inefficiencies can accumulate into considerable expenses. 
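As one sketch of such a refinement, the snippet below runs an open-source Hugging Face model on CPU in BF16 with the Intel® Extension for PyTorch introduced just after; the model name is only an example, and the exact speedup depends on the hardware:

```python
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = 'mistralai/Mistral-7B-v0.1'  # example open-source model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.eval()

# Apply IPEX kernel/graph optimizations for CPU inference (BF16 benefits from Intel AMX).
model = ipex.optimize(model, dtype=torch.bfloat16)

prompt = 'Summarize the main drivers of quarterly revenue growth:'
inputs = tokenizer(prompt, return_tensors='pt')
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```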
This approach is particularly important when using open-source models from sources such as the [HuggingFace hub](https://huggingface.co/docs/hub/index). Intel®'s advanced optimization technologies help with the efficient fine-tuning and inference of neural network models on CPUs. The 4th Gen Intel® Xeon® Scalable processors come with Intel® Advanced Matrix Extensions (Intel® AMX), an AI-enhanced acceleration feature. Each core of these processors includes integrated BF16 and INT8 accelerators, contributing to the acceleration of deep learning fine-tuning and inference speed. Additionally, libraries such as [Intel Extension for PyTorch](https://github.com/intel/intel-extension-for-pytorch) and [Intel® Extension for Transformers](https://github.com/intel/intel-extension-for-transformers) further optimize the performance of neural network models demanding computations on CPUs. ### Retrieval **Performance** In RAG applications, the primary method involves dividing the data into smaller, independent units and housing them within a vector dataset. However, this often leads to failures during document retrieval, as individual segments may lack the broader context necessary to answer specific queries. LlamaIndex offers features designed to construct a network of interlinked chunks (nodes), along with retrieval tools. These tools improve search capabilities by augmenting user queries, extracting key terms, or navigating through the connected nodes to locate the necessary information for answering queries. Advanced data management tools can help organize, index, and retrieve data more effectively. New tooling can also assist in handling large volumes of data and complex queries, which are common in RAG systems.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320355-production-ready-rag-solutions-with-llamaindex 14,Production-Ready RAG Solutions with LlamaIndex,"# Optimization ## The Role of the Retrieval Step While the role of the retrieval step is frequently underestimated, it is vital for the effectiveness of the RAG pipeline. The techniques employed in this phase significantly influence the relevance and contextuality of the output. The LlamaIndex framework provides a variety of retrieval methods, complete with practical examples for different use cases, including the following examples, to name a few. - Combining keyword + embedding search in a hybrid approach can enhance retrieval of specific queries. [[link](https://docs.llamaindex.ai/en/stable/examples/query_engine/CustomRetrievers.html)] - Metadata filtering can provide additional context and improve the performance of the RAG pipeline. [[link](https://docs.llamaindex.ai/en/stable/examples/vector_stores/WeaviateIndexDemo.html#metadata-filtering)] - Re-ranking orders the search results by considering the recency of data to the user’s input query. [[link](https://docs.llamaindex.ai/en/stable/examples/node_postprocessor/CohereRerank.html)] - Indexing documents by summaries and retrieving relevant information within the document. [[link](https://docs.llamaindex.ai/en/stable/examples/index_structs/doc_summary/DocSummary.html)] Additionally, augmenting chunks with metadata will provide more context and enhance retrieval accuracy by defining node relationships between chunks for retrieval algorithms. Language models can help extract page numbers and other annotations from text chunks. Decouple embeddings from raw text chunks to avoid biases and improve context capture. 
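To make the metadata idea concrete, here is a small sketch of attaching a metadata field to documents and filtering on it at query time with LlamaIndex; the `department` key, its values, and the document texts are made up for illustration:

```python
from llama_index import Document, VectorStoreIndex
from llama_index.vector_stores.types import ExactMatchFilter, MetadataFilters

docs = [
    Document(text='FY23 hiring plan and headcount targets.', metadata={'department': 'hr'}),
    Document(text='FY23 revenue guidance and margin outlook.', metadata={'department': 'finance'}),
]
index = VectorStoreIndex.from_documents(docs)

# Restrict retrieval to chunks whose metadata matches the filter.
filters = MetadataFilters(filters=[ExactMatchFilter(key='department', value='finance')])
query_engine = index.as_query_engine(filters=filters, similarity_top_k=2)
print(query_engine.query('What is the revenue guidance?'))
```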
Embedding references, summaries in text chunks, and text at the sentence level improves retrieval performance by fetching granular pieces of information. Organizing data with metadata filters helps with structured retrieval by ensuring relevant chunks are fetched.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320355-production-ready-rag-solutions-with-llamaindex 15,Production-Ready RAG Solutions with LlamaIndex,"# RAG Best Practices Here are some good practices for dealing with RAG:",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320355-production-ready-rag-solutions-with-llamaindex 16,Production-Ready RAG Solutions with LlamaIndex,"# RAG Best Practices ## Fine-Tuning the Embedding Model [Fine-tuning the embedding model](https://docs.llamaindex.ai/en/stable/optimizing/fine-tuning/fine-tuning.html#finetuning-embeddings) involves several key steps (like the creation of the training set) to enhance the embedding performance. Initially, it’s necessary to get the training set, which can be done by generating synthetic questions/answers from random documents. The next phase is fine-tuning the model, where adjustments are made to optimize its functioning. Following this, the model can optionally undergo an evaluation process to assess its improvements. The reported numbers from LlamaIndex show that the fine-tuning process can yield a 5-10% improvement in retrieval metrics, enabling the enhanced model to be effectively integrated into RAG applications. LlamaIndex offers capabilities for various fine-tuning types, including adjustments to embedding models, adaptors, and even routers, to boost the overall efficiency of the pipeline. This method supports the model by improving its capacity to develop more impactful embedding representations, extracting deeper and more significant insights from the data. You can [read here](https://docs.llamaindex.ai/en/stable/optimizing/fine-tuning/fine-tuning.html#finetuning-embeddings) for more information.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320355-production-ready-rag-solutions-with-llamaindex 17,Production-Ready RAG Solutions with LlamaIndex,"# RAG Best Practices ## LLM Fine-Tuning [Fine-tuning the LLM](https://docs.llamaindex.ai/en/stable/optimizing/fine-tuning/fine-tuning.html#fine-tuning-llms) creates a model that effectively grasps the overall style of the dataset, leading to the generation of more precise responses. Fine-tuning the generative model brings several advantages, such as reducing hallucinations during output formation, which are typically challenging to eliminate through prompt engineering. Moreover, the refined model has a deeper understanding of the dataset, enhancing performance even in smaller models. This means achieving performance comparable to GPT-4 while employing more cost-effective alternatives like GPT-3.5. LlamaIndex offers a variety of fine-tuning schemas tailored to specific goals. It enhances model capabilities for use cases such as following a predetermined output structure, boosting its proficiency in converting natural language into SQL queries or augmenting its capacity for memorizing new knowledge. 
The [documentation](https://docs.llamaindex.ai/en/stable/optimizing/fine-tuning/fine-tuning.html#fine-tuning-llms) section has several examples.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320355-production-ready-rag-solutions-with-llamaindex 18,Production-Ready RAG Solutions with LlamaIndex,"# RAG Best Practices ## Evaluation Regularly monitoring the performance of your RAG pipeline is a recommended practice, as it allows for assessing changes and their impact on the overall results. While evaluating a model's response, which can be highly subjective, is challenging, there are several methods available to track progress effectively. LlamaIndex provides [modules for assessing the quality](https://docs.llamaindex.ai/en/stable/optimizing/evaluation/evaluation.html) of the generated results and the retrieval process. Response evaluation focuses on whether the response aligns with the retrieved context and the initial query and if it adheres to the reference answer or set guidelines. For retrieval evaluation, the emphasis is on the relevance of the sources retrieved in relation to the query. A common method for assessing responses involves employing a proficient LLM, such as GPT-4, to evaluate the generated responses against various criteria. This evaluation can encompass aspects like correctness, semantic similarity, and faithfulness, among others. Please refer to the [following tutorial](https://docs.llamaindex.ai/en/stable/module_guides/evaluating/root.html) for more information on the evaluation process and techniques.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320355-production-ready-rag-solutions-with-llamaindex 19,Production-Ready RAG Solutions with LlamaIndex,"# RAG Best Practices ## Generative Feedback Loops A key aspect of generative feedback loops is injecting data into prompts. This process involves feeding specific data points into the RAG system to generate contextualized outputs. Once the RAG system generates descriptions or vector embeddings, these outputs can be stored in the database. The creation of a loop where generated data is continually used to enrich and update the database can improve the system's ability to produce better outputs.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320355-production-ready-rag-solutions-with-llamaindex 20,Production-Ready RAG Solutions with LlamaIndex,"# RAG Best Practices ## Hybrid Search It is essential to keep in mind that embedding-based retrieval is not always practical for entity lookup. Implementing a hybrid search that combines the benefits of keyword lookup with additional context from embeddings can yield better results, offering a balanced approach between specificity and context.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320355-production-ready-rag-solutions-with-llamaindex 21,Production-Ready RAG Solutions with LlamaIndex,"# Conclusion In this lesson, we covered the challenges and optimization strategies of Retrieval-Augmented Generation (RAG) systems, emphasizing the importance of effective data management, diverse representations in latent space, and compliance in complex environments. We highlighted techniques like dynamic updating of vectors, chunk size optimization, and hybrid retrieval approaches. 
We also explored the role of LlamaIndex in enhancing retrieval performance through data organization and the significance of fine-tuning embedding and LLM models for optimal RAG applications. Lastly, we recommended regular evaluation and the use of generative feedback loops and hybrid searches for maintaining and improving RAG systems. RESOURCES: “Make RAG Production-Ready” webinar: [https://www.youtube.com/watch?v=g-VvYLhYhOg](https://www.youtube.com/watch?v=g-VvYLhYhOg) --- *For more information on Intel® Accelerator Engines, visit [this resource page](https://download.intel.com/newsroom/2023/data-center-hpc/4th-Gen-Xeon-Accelerator-Fact-Sheet.pdf). Learn more about Intel® Extension for Transformers, an Innovative Transformer-based Toolkit to Accelerate GenAI/LLM Everywhere [here](https://github.com/intel/intel-extension-for-transformers).* *Intel, the Intel logo, and Xeon are trademarks of Intel Corporation or its subsidiaries.*",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320355-production-ready-rag-solutions-with-llamaindex 22,LlamaIndex Introduction Precision and Simplicity i,"# LlamaIndex Introduction: Precision and Simplicity in Information Retrieval In this guide, we will comprehensively explore the LlamaIndex framework, which helps in building Retrieval-Augmented Generation (RAG) systems for LLM-based applications. LlamaIndex, like other RAG frameworks, combines the fetching of relevant information from a vast database with the generative capabilities of Large Language Models. It involves providing supplementary information to the LLM for a posed question to ensure that the LLM does not generate inaccurate responses. The aim is to provide a clear understanding of the best practices in developing LLM-based applications. We will explain LlamaIndex's fundamental components in the following sections.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320311-llamaindex-introduction-precision-and-simplicity-in-information-retrieval 23,LlamaIndex Introduction Precision and Simplicity i,"# **Vector Stores** Vector store databases enable the storage of large, high-dimensional data and provide the essential tools for semantically retrieving relevant documents. This implies that rather than naively checking for the presence of specific words in a document, these systems analyze the embedding vectors that encapsulate the entire document's meaning. This approach simplifies the search process and enhances its accuracy. Searching in vector stores focuses on fetching data according to its vector-based representations. These databases are integral across various domains, including Natural Language Processing (NLP) and multimodal applications, allowing for the efficient storing and analysis of high-dimensional datasets. A primary function in vector stores is the similarity search, aiming to locate vectors closely resembling a specific query vector. This functionality is critical in numerous AI-driven systems, such as recommendation engines and image retrieval platforms, where pinpointing contextually relevant data is critical. Semantic search transcends traditional keyword matching by seeking information that aligns conceptually with the user's query. It captures the meaning in vectorized representations by employing advanced ML techniques. Semantic search can be applied to all data formats, as we vectorize the data before storing it in a database. 
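As a toy illustration of the similarity metric underlying this kind of search, the self-contained sketch below uses made-up three-dimensional vectors standing in for real embedding vectors:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: close to 1.0 means the vectors point in the same direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query_vec = np.array([0.9, 0.1, 0.3])       # "total revenue this quarter"
doc_revenue = np.array([0.8, 0.2, 0.25])    # "revenue amounted to X dollars"
doc_other = np.array([-0.1, 0.9, 0.4])      # unrelated document

print(cosine_similarity(query_vec, doc_revenue))  # high score -> retrieved
print(cosine_similarity(query_vec, doc_other))    # low score  -> skipped
```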
Once we have an embedded format, we can calculate indexed similarities or capture the context embedded in the query. This ensures that the results are relevant and in line with the contextual and conceptual nuances of the user's intent.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320311-llamaindex-introduction-precision-and-simplicity-in-information-retrieval 24,LlamaIndex Introduction Precision and Simplicity i,"# **Data Connectors** The effectiveness of a RAG-based application is significantly enhanced by accessing a vector store that compiles information from various sources. However, managing data in diverse formats can be challenging. Data connectors, also called `Readers`, are essential in LlamaIndex. Readers are responsible for parsing and converting the data into a simplified `Document` representation consisting of text and basic metadata. Data connectors are designed to streamline the data ingestion process, automate the task of fetching data from various sources (like APIs, PDFs, and SQL databases), and format it. [LlamaHub](https://llamahub.ai/) is an open-source project that hosts data connectors. The LlamaHub repository offers data connectors for ingesting all possible data formats into the LLM, and you can browse it to explore the available integrations and data sources and test some of the loaders. These implementations make the preprocessing step as simple as executing a function. Take the [Wikipedia](https://llamahub.ai/l/wikipedia?from=loaders) integration, for instance. Before testing loaders, we must install the required packages and set the OpenAI API key for LlamaIndex. You can get the API key on [OpenAI's website](https://platform.openai.com/playground) and set the `OPENAI_API_KEY` environment variable. Please note that LlamaIndex defaults to using OpenAI's `gpt-3.5-turbo` for text generation and the `text-embedding-ada-002` model for embedding generation. ```bash pip install -q llama-index==0.9.14.post3 openai==1.3.8 cohere==4.37 ``` ```python # Add API Keys import os os.environ['OPENAI_API_KEY'] = '' # Enable Logging import logging import sys # You can set the logging level to DEBUG for more verbose output, # or use level=logging.INFO for less detailed information. logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout)) ``` We have also added a logging mechanism to the code. Logging in LlamaIndex is a way to monitor the operations and events that occur during the execution of your application. Logging helps in developing and debugging the process and understanding the details of what the application is doing. In a production environment, you can configure the logging module to output log messages to a file or a logging service. We can now use the `download_loader` method to access integrations from LlamaHub and activate them by passing the integration name to the class. In our sample code, the `WikipediaReader` class takes in several page titles and returns the text contained within them as `Document` objects.
```python from llama_index import download_loader WikipediaReader = download_loader(""WikipediaReader"") loader = WikipediaReader() documents = loader.load_data(pages=['Natural Language Processing', 'Artificial Intelligence']) print(len(documents)) ``` ``` 2 ``` This retrieved information can be stored and utilized to enhance the knowledge base of our chatbot.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320311-llamaindex-introduction-precision-and-simplicity-in-information-retrieval 25,LlamaIndex Introduction Precision and Simplicity i,"# **Nodes** In LlamaIndex, once data is ingested as documents, it passes through a processing structure that transforms these documents into `Node` objects. Nodes are smaller, more granular data units created from the original documents. Besides their primary content, these nodes also contain metadata and contextual information. LlamaIndex features a `NodeParser` class designed to convert the content of documents into structured nodes automatically. The `SimpleNodeParser` converts a list of document objects into nodes. ```python from llama_index.node_parser import SimpleNodeParser # Assuming documents have already been loaded # Initialize the parser parser = SimpleNodeParser.from_defaults(chunk_size=512, chunk_overlap=20) # Parse documents into nodes nodes = parser.get_nodes_from_documents(documents) print(len(nodes)) ``` ``` 48 ``` The code above splits the two retrieved documents from the Wikipedia page into 48 smaller chunks with slight overlap.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320311-llamaindex-introduction-precision-and-simplicity-in-information-retrieval 26,LlamaIndex Introduction Precision and Simplicity i,"# ****Indices**** At the heart of LlamaIndex is the capability to index and search various data formats like documents, PDFs, and database queries. Indexing is an initial step for storing information in a database; it essentially transforms the unstructured data into embeddings that capture semantic meaning and optimize the data format so it can be easily accessed and queried. LlamaIndex has a variety of index types, each fulfills a specific role. We have highlighted some of the popular index types in the following subsections.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320311-llamaindex-introduction-precision-and-simplicity-in-information-retrieval 27,LlamaIndex Introduction Precision and Simplicity i,"# ****Indices**** ## ****Summary Index**** The [Summary Index](https://docs.llamaindex.ai/en/stable/examples/index_structs/doc_summary/DocSummary.html) extracts a summary from each document and stores it with all the nodes in that document. Since it’s not always easy to match small node embeddings with a query, sometimes having a document summary helps.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320311-llamaindex-introduction-precision-and-simplicity-in-information-retrieval 28,LlamaIndex Introduction Precision and Simplicity i,"# ****Indices**** ## Vector Store Index The [Vector Store Index](https://docs.llamaindex.ai/en/stable/module_guides/indexing/vector_store_guide.html) generates embeddings during index construction to identify the top-k most similar nodes in response to a query. It’s suitable for small-scale applications and easily scalable to accommodate larger datasets using high-performance vector databases. 
![Fetching the top-k nodes and passing them for generating the final response](LlamaIndex%20Introduction%20Precision%20and%20Simplicity%20i%20412eaf1ae77b40cd861f55e3193ca666/image_2023-11-16_170013902.png) Fetching the top-k nodes and passing them for generating the final response The crawled Wikipedia documents can be stored in a Deep Lake vector store, and an index object can be created based on its data. We can create the dataset in [Activeloop](https://www.activeloop.ai/) and append documents to it by employing the `DeepLakeVectorStore` class. First, we need to set the Activeloop and OpenAI API keys in the environment using the following code. ```python import os os.environ['OPENAI_API_KEY'] = '' os.environ['ACTIVELOOP_TOKEN'] = '' ``` To connect to the platform, use the `DeepLakeVectorStore` class and provide the `dataset path` as an argument. To save the dataset on your workspace, you can replace the `genai360` name with your organization ID (which defaults to your Activeloop username). Running the following code will create an empty dataset. ```python from llama_index.vector_stores import DeepLakeVectorStore my_activeloop_org_id = ""genai360"" my_activeloop_dataset_name = ""LlamaIndex_intro"" dataset_path = f""hub://{my_activeloop_org_id}/{my_activeloop_dataset_name}"" # Create an index over the documnts vector_store = DeepLakeVectorStore(dataset_path=dataset_path, overwrite=False) ``` ```python Your Deep Lake dataset has been successfully created! ``` Now, we need to create a storage context using the `StorageContext` class and the Deep Lake dataset as the source. Pass this storage to a `VectorStoreIndex` class to create the index (generate embeddings) and store the results on the defined dataset. ```python from llama_index.storage.storage_context import StorageContext from llama_index import VectorStoreIndex storage_context = StorageContext.from_defaults(vector_store=vector_store) index = VectorStoreIndex.from_documents( documents, storage_context=storage_context ) ``` ``` Uploading data to deeplake dataset. 100%|██████████| 23/23 [00:00<00:00, 69.43it/s] Dataset(path='hub://genai360/LlamaIndex_intro', tensors=['text', 'metadata', 'embedding', 'id']) tensor htype shape dtype compression ------- ------- ------- ------- ------- text text (23, 1) str None metadata json (23, 1) str None embedding embedding (23, 1536) float32 None id text (23, 1) str None ``` The created database will be accessible in the future. The Deep Lake database efficiently stores and retrieves high-dimensional vectors. ",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320311-llamaindex-introduction-precision-and-simplicity-in-information-retrieval 29,LlamaIndex Introduction Precision and Simplicity i,"# **Query Engines** The next step is to leverage the generated indexes to query through the information. The Query Engine is a wrapper that combines a Retriever and a Response Synthesizer into a pipeline. The pipeline uses the query string to fetch nodes and then sends them to the LLM to generate a response. A query engine can be created by calling the `as_query_engine()` method on an already-created index. The code below uses the documents fetched from the Wikipedia page to construct a Vector Store Index using the `GPTVectorStoreIndex` class. The `.from_documents()` method simplifies building indexes on these processed documents. 
The created index can then be utilized to generate a `query_engine` object, allowing us to ask questions based on the documents using the `.query()` method. ```python from llama_index import GPTVectorStoreIndex index = GPTVectorStoreIndex.from_documents(documents) query_engine = index.as_query_engine() response = query_engine.query(""What does NLP stands for?"") print( response.response ) ``` ``` NLP stands for Natural Language Processing. ``` The indexes can also function solely as retrievers for fetching documents relevant to a query. This capability enables the creation of a Custom Query Engine, offering more control over various aspects, such as the prompt or the output format. You can learn more [here](https://docs.llamaindex.ai/en/stable/examples/query_engine/custom_query_engine.html#defining-a-custom-query-engine).",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320311-llamaindex-introduction-precision-and-simplicity-in-information-retrieval 30,LlamaIndex Introduction Precision and Simplicity i,"# **Routers** [Routers](https://docs.llamaindex.ai/en/stable/module_guides/querying/router/root.html#routers) play a role in determining the most appropriate retriever for extracting context from the knowledge base. The routing function selects the optimal query engine for each task, improving performance and accuracy. These functions are beneficial when dealing with multiple data sources, each holding unique information. Consider an application that employs a SQL database and a Vector Store as its knowledge base. In this setup, the router can determine which data source is most applicable to the given query. You can see a working example of incorporating the routers [in this tutorial](https://docs.llamaindex.ai/en/stable/module_guides/querying/router/root.html#routers).",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320311-llamaindex-introduction-precision-and-simplicity-in-information-retrieval 31,LlamaIndex Introduction Precision and Simplicity i,"# Saving and Loading Indexes Locally All the examples we explored involved storing indexes on cloud-based vector stores like Deep Lake. However, there are scenarios where saving the data on a disk might be necessary for rapid testing. The concept of storing refers to saving the index data, which includes the nodes and their associated embeddings, to disk. This is done using the `persist()` method from the `storage_context` object related to the index. ```python # store index as vector embeddings on the disk index.storage_context.persist() # This saves the data in the 'storage' by default # to minimize repetitive processing ``` If the index already exists in storage, you can load it directly instead of recreating it. We simply need to determine whether the index already exists on disk and proceed accordingly; here is how to do it: ```python # Index Storage Checks import os.path from llama_index import ( VectorStoreIndex, StorageContext, load_index_from_storage, ) from llama_index import download_loader # Let's see if our index already exists in storage. 
if not os.path.exists(""./storage""): # If not, we'll load the Wikipedia data and create a new index WikipediaReader = download_loader(""WikipediaReader"") loader = WikipediaReader() documents = loader.load_data(pages=['Natural Language Processing', 'Artificial Intelligence']) index = VectorStoreIndex.from_documents(documents) # Index storing index.storage_context.persist() else: # If the index already exists, we'll just load it: storage_context = StorageContext.from_defaults(persist_dir=""./storage"") index = load_index_from_storage(storage_context) ``` In this example, the `os.path.exists(""./storage"")` function is used to check if the 'storage' directory exists. If it does not exist, the Wikipedia data is loaded, and a new index is created.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320311-llamaindex-introduction-precision-and-simplicity-in-information-retrieval 32,LlamaIndex Introduction Precision and Simplicity i,"# **LangChain vs. LlamaIndex** LangChain and LlamaIndex are designed to improve LLMs' capabilities, each with its own strengths. **LlamaIndex**: LlamaIndex specializes in processing, structuring, and accessing private or domain-specific data, with a focus on specific LLM interactions. It works for tasks that demand high precision and quality when dealing with specialized, domain-specific data. Its main strength lies in linking Large Language Models (LLMs) to any data source. **LangChain** is dynamic, suited for context-rich interactions, and effective for applications like chatbots and virtual assistants. These features render it highly appropriate for quick prototyping and application development. While generally used independently, it is worth noting that it is possible to combine functions from both LangChain and LlamaIndex where they have different strengths. Both can be complementary tools. We also designed a table below to help you understand the differences better. The attached video in the course also aims to help you decide which tool you should use for your application: LlamaIndex, LangChain, OpenAI Assistants, or doing it all from scratch (yourself). Here’s a clear comparison of each to help you quickly grasp the essentials on a few relevant topics you may consider when choosing: | | LangChain | LlamaIndex | OpenAI Assistants | | --- | --- | --- | --- | | | Interact with LLMs - Modular and more flexible | Data framework for LLMs - Empower RAG | Assistant API - SaaS | | Data | • Standard formats like CSV, PDF, TXT, … • Mostly focus on Vector Stores. | • LlamaHub with dedicated data loaders from different sources. (Discord, Slack, Notion, …) • Efficient indexing and retrieving + easily add new data points without calculating embeddings for all. • Improved chunking strategy by linking them and using metadata. • Support multimodality. | • Up to 20 files, each up to 512 MB. • Accepts a wide range of [supported file types](https://platform.openai.com/docs/assistants/tools/supported-files). | | LLM Interaction | • Prompt templates to facilitate interactions. • Very flexible, easily defining chains and using different modules. Choose the prompting strategy, model, and output parser from many options. • Can directly interact with LLMs and create chains without the need to have additional data. | • Mostly use LLMs in the context of manipulating data. Either for indexing or querying. | • Either GPT-3.5 Turbo or GPT-4 + any fine-tuned model.
| | Optimizations | - | • LLM fine-tuning • Embedding fine-tuning | - | | Querying | • Use retriever functions. | • [Indexing/querying techniques](https://twitter.com/llama_index/status/1729303619760259463?s=20) like subquestions, HyDE, ... • Routing: enables using multiple data sources | • Threads and Messages to keep track of user conversations. | | Agents | • LangSmith | • LlamaHub | • Code interpreter, knowledge retriever, and custom function calls. | | Documentation | • Easy to debug. • Easy to find concepts and understand the function usage. | • As of November 2023, the methods are mostly explained in the form of tutorials or blog posts. A bit harder to debug. | • Great. | | Pricing | FREE | FREE | • $0.03 / code interpreter session • $0.20 / GB / assistant / day • + the usual LLM usage cost | A smart approach involves closely examining your unique use case and its specific demands. The key decision boils down to whether you need interactive agents or powerful search capabilities for retrieving information. If your aim leans towards the latter, LlamaIndex stands out as a likely choice",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320311-llamaindex-introduction-precision-and-simplicity-in-information-retrieval 33,LlamaIndex Introduction Precision and Simplicity i,"# **LangChain vs. LlamaIndex** for enhanced performance.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320311-llamaindex-introduction-precision-and-simplicity-in-information-retrieval 34,LlamaIndex Introduction Precision and Simplicity i,"# **Conclusion** We explored the LlamaIndex library and its core building blocks. We covered data connectors and LlamaHub, which ingest data from sources such as APIs, PDFs, and SQL databases; the indexing step, which structures that data into embedding representations; vector stores and retrievers, which fetch the data relevant to a query; routers, which help select the most suitable retriever; and query engines, which tie these pieces together for knowledge retrieval. LangChain and LlamaIndex are valuable and popular frameworks for developing apps powered by language models. LangChain offers a broader range of capabilities and tool integration, while LlamaIndex specializes in indexing and retrieval of information. Read the [LlamaIndex documentation](https://gpt-index.readthedocs.io/en/stable/) to explore its full potential. >> [Notebook](https://colab.research.google.com/drive/1CgTSpnTNj50PBMbA8g8QOqZCIK-UvWLU?usp=sharing).
--- **RESOURCES**: - OpenAI [Financial Document Analysis with LlamaIndex | OpenAI Cookbook](https://cookbook.openai.com/examples/third_party/financial_document_analysis_with_llamaindex) - datacamp [LlamaIndex: A Data Framework for the Large Language Models (LLMs) based applications](https://www.datacamp.com/tutorial/llama-index-adding-personal-data-to-llms) - indexing [LlamaIndex: How to use Index correctly.](https://howaibuildthis.substack.com/p/llamaindex-how-to-use-index-correctly)",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320311-llamaindex-introduction-precision-and-simplicity-in-information-retrieval 35,Mastering Advanced RAG Techniques with LlamaIndex,"# Introduction The Retrieval-Augmented Generation (RAG) pipeline heavily relies on retrieval performance guided by the adoption of various techniques and advanced strategies. Methods like query expansion, query transformations, and query construction each play a distinct role in refining the search process. These techniques enhance the scope of search queries and the overall result quality. In addition to core methods, strategies such as reranking (with the Cohere Reranker), recursive retrieval, and small-to-big retrieval further enhance the retrieval process. Together, these techniques create a comprehensive and efficient approach to information retrieval, ensuring that searches are wide-ranging, highly relevant, and accurate.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51334510-mastering-advanced-rag-techniques-with-llamaindex 36,Mastering Advanced RAG Techniques with LlamaIndex,"# Querying in LlamaIndex As mentioned in a previous lesson, the process of querying an index in LlamaIndex is structured around several key components. - **Retrievers**: These classes are designed to retrieve a set of nodes from an index based on a given query. Retrievers source the relevant data from the index. - **Query Engine**: It is the central class that processes a query and returns a response object. Query Engine leverages the retrievers and the response synthesizer modules to curate the final output. - **Query Transform**: It is a class that enhances a raw query string with various transformations to improve the retrieval efficiency. It can be used in conjunction with a Retriever and a Query Engine. Incorporating the above components can lead to the development of an effective retrieval engine, complementing the functionality of any RAG-based application. However, the relevance of search results can noticeably improve with more advanced techniques like query construction, query expansion, and query transformations.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51334510-mastering-advanced-rag-techniques-with-llamaindex 37,Mastering Advanced RAG Techniques with LlamaIndex,"# Query Construction [Query construction](https://blog.langchain.dev/query-construction/) in RAG converts user queries to a format that aligns with various data sources. This process involves transforming questions into vector formats for unstructured data, facilitating their comparison with vector representations of source documents to identify the most relevant ones. It also applies to structured data, such as databases where queries are formatted in a compatible language like SQL, enabling effective data retrieval. The core idea is to answer user queries by leveraging the inherent structure of the data. 
For instance, a query like ""movies about aliens in the year 1980"" combines a semantic component like ""aliens"" (which will get better results if retrieved through vector storage) with a structured component like ""year == 1980"". The process involves translating a natural language query into the query language of a specific database, such as SQL for relational databases or Cypher for graph databases. Which query-construction approach to use depends on the specific use case. The first category targets vector stores that support metadata filtering: the **MetadataFilter** classes, combined with an auto-retriever, translate a natural-language question into a filtered query over the unstructured data. This involves defining the data source, interpreting the user query, extracting the logical filter conditions, and forming the final request. The other approach is **Text-to-SQL** for relational databases; converting natural language into SQL requests poses challenges like hallucination (creating fictitious tables or fields) and user errors (misspellings or irregularities). This is addressed by providing the LLM with an accurate database description and using few-shot examples to guide query generation. Query construction improves RAG answer quality because the logical filter conditions are inferred directly from the user's question, so the text chunks retrieved and passed to the LLM are already narrowed down before the final answer is synthesized. ",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51334510-mastering-advanced-rag-techniques-with-llamaindex 38,Mastering Advanced RAG Techniques with LlamaIndex,"# Query Expansion Query expansion works by extending the original query with additional terms or phrases that are related or synonymous. For instance, if the original query is too narrow or uses specific terminology, query expansion can include broader or more commonly used terms relevant to the topic. Suppose the original query is ""*climate change effects.*"" Query expansion would involve adding related terms or synonyms to this query, such as ""*global warming impact*,"" ""*environmental consequences*,"" or ""*temperature rise implications*."" One way to do this is to use the `synonym_expand_policy` from the `KnowledgeGraphRAGRetriever` class. In the context of LlamaIndex, the effectiveness of query expansion is usually enhanced when combined with the Query Transform class.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51334510-mastering-advanced-rag-techniques-with-llamaindex 39,Mastering Advanced RAG Techniques with LlamaIndex,"# **Query Transformation** Query transformations modify the original query to make it more effective in retrieving relevant information. Transformations can include changes in the query's structure, the use of synonyms, or the inclusion of contextual information.
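To make this concrete, here is a minimal sketch of one transformation available in LlamaIndex, HyDE (Hypothetical Document Embeddings), which asks the LLM to draft a hypothetical answer and uses that draft for retrieval. The sketch assumes an `index` built as in the earlier Wikipedia example and the `HyDEQueryTransform`/`TransformQueryEngine` classes shipped with the library version used in this course:

```python
from llama_index.indices.query.query_transform import HyDEQueryTransform
from llama_index.query_engine.transform_query_engine import TransformQueryEngine

# Generate a hypothetical answer for each query and use it (optionally
# together with the original query string) to retrieve similar nodes.
hyde = HyDEQueryTransform(include_original=True)
hyde_query_engine = TransformQueryEngine(index.as_query_engine(), query_transform=hyde)

response = hyde_query_engine.query("What does NLP stand for?")
print(response)
```

Because the hypothetical answer tends to share vocabulary with the relevant passages, this kind of rewriting often helps with short or under-specified questions.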
Consider a user query like ""*What were Microsoft's revenues in 2021?*"" To enhance this query through transformations, the original query could be modified to be more like *“Microsoft revenues 2021”, which is more optimized for search engines and vector DBs.* Query transformations involve **changing the structure** of a query to improve its performance.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51334510-mastering-advanced-rag-techniques-with-llamaindex 40,Mastering Advanced RAG Techniques with LlamaIndex,"# **Query Engine** A [Query engine](https://docs.llamaindex.ai/en/stable/module_guides/deploying/query_engine/root.html) is a sophisticated interface designed to interact with data through natural language queries. It's a system that processes queries and delivers responses. As mentioned in previous lessons, multiple query engines can be combined for enhanced functionality, catering to complex data interrogation needs. For a more interactive experience resembling a back-and-forth conversation, a [Chat Engine](https://docs.llamaindex.ai/en/stable/module_guides/deploying/chat_engines/root.html) can be used in scenarios requiring multiple queries and responses, providing a more dynamic and engaging interaction with data. A basic usage of query engines is to call the `.as_query_engine()` method on the created Index. This section will include a step-by-step example of creating indexes from text files and utilizing query engines to interact with the dataset. The first step is installing the required packages using Python package manager (PIP), followed by setting the API key environment variables. ```bash pip install -q llama-index==0.9.14.post3 deeplake==3.8.8 openai==1.3.8 cohere==4.37 ``` ```python import os os.environ['OPENAI_API_KEY'] = '' os.environ['ACTIVELOOP_TOKEN'] = '' ``` The next step is downloading the text file that serves as our source document. This file is a compilation of all the essays Paul Graham wrote on his blog, merged into a single text file. You have the option to download the file from the [provided URL](https://raw.githubusercontent.com/run-llama/llama_index/main/docs/examples/data/paul_graham/paul_graham_essay.txt), or you can execute these commands in your terminal to create a directory and store the file. ```bash mkdir -p './paul_graham/' wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/examples/data/paul_graham/paul_graham_essay.txt' -O './paul_graham/paul_graham_essay.txt' ``` Now, use the `SimpleDirectoryReader` within the LlamaIndex framework to read all files from a specified directory. This class will automatically cycle through the files, reading them as `Document` objects. ```python from llama_index import SimpleDirectoryReader # load documents documents = SimpleDirectoryReader(""./paul_graham"").load_data() ``` We can now employ the `ServiceContext` to divide the lengthy single document into several smaller chunks with some overlap. Following this, we can proceed to create the nodes out of the generated documents. ```python from llama_index import ServiceContext service_context = ServiceContext.from_defaults(chunk_size=512, chunk_overlap=64) node_parser = service_context.node_parser nodes = node_parser.get_nodes_from_documents(documents) ``` The nodes must be stored in a vector store database to enable easy access. The `DeepLakeVectorStore` class can create an empty dataset when given a path. 
You can use `genai360` to access the processed dataset or change the organization ID to your Activeloop username to store the data in your own workspace. ```python from llama_index.vector_stores import DeepLakeVectorStore my_activeloop_org_id = ""genai360"" my_activeloop_dataset_name = ""LlamaIndex_paulgraham_essays"" dataset_path = f""hub://{my_activeloop_org_id}/{my_activeloop_dataset_name}"" # Create the Deep Lake vector store that will hold the data vector_store = DeepLakeVectorStore(dataset_path=dataset_path, overwrite=False) ``` ``` Your Deep Lake dataset has been successfully created! ``` The new database will be wrapped as a `StorageContext` object, which accepts nodes to provide the necessary context for establishing relationships if needed. Finally, the `VectorStoreIndex` takes in the nodes along with links to the database and uploads the data to the cloud. Essentially, it constructs the index and generates embeddings for each segment. ```python from llama_index.storage.storage_context import StorageContext from llama_index import VectorStoreIndex storage_context = StorageContext.from_defaults(vector_store=vector_store) storage_context.docstore.add_documents(nodes) vector_index = VectorStoreIndex(nodes, storage_context=storage_context) ``` ``` Uploading data to deeplake dataset. 100%|██████████| 40/40 [00:00<00:00, 40.60it/s] Dataset(path='hub://genai360/LlamaIndex_paulgraham_essays', tensors=['text', 'metadata', 'embedding', 'id']) tensor htype shape dtype compression ------- ------- ------- ------- ------- text text (40, 1) str None metadata json (40, 1) str None embedding embedding (40, 1536) float32 None id text (40, 1) str None ``` The created index serves as the basis for defining the query engine. We initiate a query engine by using the vector index object and executing the `.as_query_engine()` method. The following code sets the `streaming` flag to True, which reduces idle waiting time for the end user (more details on this will follow). Additionally, it employs the `similarity_top_k` flag to specify the number of source documents it can consult to respond to each query. ```python
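# streaming=True streams the answer token by token instead of waiting for the full text;
# similarity_top_k=10 lets the retriever pass the ten most similar nodes to the LLM.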
To observe the impact of this feature, use the `print_response_stream` method on the response object of the query engine. ### Sub Question Query Engine Sub Question Query Engine, a more sophisticated querying method, can be employed to address the challenge of responding to complex queries. This engine can generate several sub-questions from the user's main question, answer each separately, and then compile the responses to construct the final answer. First, we must modify the previous query engine by removing the streaming flag, which conflicts with this technique. ```python query_engine = vector_index.as_query_engine(similarity_top_k=10) ``` We register the created `query_engine` as a tool by employing the `QueryEngineTool` class and compose metadata (description) for it. It is done to inform the framework about this tool's function and enable it to select the most suitable tool for a given task, especially when multiple tools are available. Then, the combination of the tools we declared earlier and the service context, which was previously defined, can be used to initialize the `SubQuestionQueryEngine` object. ```python from llama_index.tools import QueryEngineTool, ToolMetadata from llama_index.query_engine import SubQuestionQueryEngine query_engine_tools = [ QueryEngineTool( query_engine=query_engine, metadata=ToolMetadata( name=""pg_essay"", description=""Paul Graham essay on What I Worked On"", ), ), ] query_engine = SubQuestionQueryEngine.from_defaults( query_engine_tools=query_engine_tools, service_context=service_context, use_async=True, ) ``` The setup is ready to ask a question using the same `query` method. As observed, it formulates three questions, each responding to a part of the query, and attempts to find their answers individually. A response synthesizer then processes these answers to create the final output. ```python response = query_engine.query( ""How was Paul Grahams life different before, during, and after YC?"" ) print( "">>> The final response:\n"", response ) ``` ``` Generated 3 sub questions. [pg_essay] Q: What did Paul Graham work on before YC? [pg_essay] Q: What did Paul Graham work on during YC? [pg_essay] Q: What did Paul Graham work on after YC? [pg_essay] A: During YC, Paul Graham worked on writing essays and working on YC itself. [pg_essay] A: Before YC, Paul Graham worked on a variety of projects. He wrote essays, worked on YC's internal software in Arc, and also worked on a new version of Arc. Additionally, he started Hacker News, which was originally meant to be a news aggregator for startup founders. [pg_essay] A: After Y Combinator (YC), Paul Graham worked on various projects. He focused on writing essays and also worked on a programming language called Arc. However, he gradually reduced his work on Arc due to time constraints and the infrastructure dependency on it. Additionally, he engaged in painting for a period of time. Later, he worked on",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51334510-mastering-advanced-rag-techniques-with-llamaindex 42,Mastering Advanced RAG Techniques with LlamaIndex,"# **Query Engine** a new version of Arc called Bel, which he worked on intensively and found satisfying. He also continued writing essays and exploring other potential projects. >>> The final response: Paul Graham's life was different before, during, and after YC. Before YC, he worked on a variety of projects including writing essays, developing YC's internal software in Arc, and creating Hacker News. 
During YC, his focus shifted to writing essays and working on YC itself. After YC, he continued writing essays but also worked on various projects such as developing the programming language Arc and later its new version called Bel. He also explored other potential projects and engaged in painting for a period of time. Overall, his work and interests evolved throughout these different phases of his life. ``` ### Custom Retriever Engine As you might have noticed, the choice of retriever and its parameters (e.g., the number of returned documents) influences the quality and relevance of the outcomes generated by the `QueryEngine`. LlamaIndex supports the creation of custom retrievers. Custom retrievers are a combination of different retriever styles, creating more nuanced retrieval strategies that adapt to distinct individual queries. The `RetrieverQueryEngine` operates with a designated retriever, which is specified at the time of its initialization. The choice of this retriever is vital, as it significantly impacts the quality of the results. Two main retriever types can back a `RetrieverQueryEngine`: 1. **VectorIndexRetriever** fetches the top-k nodes that are most similar to the query. It focuses on relevance and similarity, ensuring the results closely align with the query's intent. It is the approach we used in previous subsections. **Use Case**: It is ideal for situations where precision and relevance to the specific query are paramount, like in detailed research or topic-specific inquiries. 2. **SummaryIndexRetriever** retrieves all nodes related to the query without prioritizing their relevance. This approach is less concerned with aligning closely to the specific context of the question and more about providing a broad overview. **Use Case**: Useful in scenarios where a comprehensive sweep of information is needed, regardless of the direct relevance to the specific terms of the query, like in exploratory searches or general overviews. ",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51334510-mastering-advanced-rag-techniques-with-llamaindex 43,Mastering Advanced RAG Techniques with LlamaIndex,"# Reranking While any retrieval mechanism capable of extracting multiple chunks from a large document can be efficient to an extent, there is always a likelihood that it will select some irrelevant candidates among the results. Reranking is re-evaluating and re-ordering search results to present the most relevant options. Eliminating the lower-scored chunks gives the LLM a more concentrated context, which boosts overall efficiency and answer quality. The Cohere Reranker improves the quality of the retrieved content. While the semantic search component is already highly capable of retrieving relevant documents, the [Rerank endpoint](https://cohere.com/rerank?ref=txt.cohere.com) boosts the quality of the search results, especially for complex and domain-specific queries. It sorts the search results according to their relevance to the query. It is important to note that Rerank is not a replacement for a search engine but a supplementary tool for sorting search results in the most effective way possible for the user.
![document ingestion, retrieval, and reranking by cohere.](Mastering%20Advanced%20RAG%20Techniques%20with%20LlamaIndex%201895cd3da0b84fb0a287827064618263/Mastering%20Advanced%20RAG%20Techniques%20with%20LlamaIndex%20%20ce2af300eb2c4f14aa74c9d4fcb93585/image_2023-11-27_193534349.png) document ingestion, retrieval, and reranking by cohere. The process begins with grouping documents into batches, after which the LLM evaluates each batch, attributing relevance scores to them. The final step in the reranking process involves aggregating the most relevant documents from all these batches to form the final retrieval response. This method guarantees that the most pertinent information is highlighted and becomes the focal point of the search outcomes. The necessary dependencies have already been installed; the only remaining step is to obtain your API key from [Cohere.com](http://cohere.com/) and substitute it for the placeholder provided. ```python import cohere import os os.environ['COHERE_API_KEY'] = """" # Get your cohere API key on: www.cohere.com co = cohere.Client(os.environ['COHERE_API_KEY']) # Example query and passages query = ""What is the capital of the United States?"" documents = [ ""Carson City is the capital city of the American state of Nevada. At the 2010 United States Census, Carson City had a population of 55,274."", ""The Commonwealth of the Northern Mariana Islands is a group of islands in the Pacific Ocean that are a political division controlled by the United States. Its capital is Saipan."", ""Charlotte Amalie is the capital and largest city of the United States Virgin Islands. It has about 20,000 people. The city is on the island of Saint Thomas."", ""Washington, D.C. (also known as simply Washington or D.C., and officially as the District of Columbia) is the capital of the United States. It is a federal district. "", ""Capital punishment (the death penalty) has existed in the United States since before the United States was a country. As of 2017, capital punishment is legal in 30 of the 50 states."", ""North Dakota is a state in the United States. 672,591 people lived in North Dakota in the year 2010. The capital and seat of government is Bismarck."" ] ``` We define a rerank object by passing both the query and the documents. We also set the `top_n` argument to 3; we specifically instruct the system to retrieve the top three highest-scored candidates by the model. In this case, the model employed for reranking is `rerank-english-v2.0`. ```python results = co.rerank(query=query, documents=documents, top_n=3, model='rerank-english-v2.0') # Change top_n to change the number of results returned. If top_n is not passed, all results will be returned. for idx, r in enumerate(results): print(f""Document Rank: {idx + 1}, Document Index: {r.index}"") print(f""Document: {r.document['text']}"") print(f""Relevance Score: {r.relevance_score:.2f}"") print(""\n"") ``` ``` **Document Rank: 1**, Document Index: 3 Document: Washington, D.C. (also known as simply Washington or D.C., and officially as the District of Columbia) is the capital of the United States. It is a federal district. The President of the USA
Relevance Score: 0.99 **Document Rank: 2**, Document Index: 1 Document: The Commonwealth of the Northern Mariana Islands is a group of islands in the Pacific Ocean that are a political division controlled by the United States. Its capital is Saipan. Relevance Score: 0.30 **Document Rank: 3**, Document Index: 5 Document: Capital punishment (the death penalty) has existed in the United States since before the United States was a country. As of 2017, capital punishment is legal in 30 of the 50 states. The federal government (including the United States military) also uses capital punishment. Relevance Score: 0.27 ``` This can be accomplished using LlamaIndex in conjunction with Cohere Rerank. The rerank object can be integrated into a query engine, allowing it to manage the reranking process seamlessly in the background. We will use the same vector index defined earlier to avoid repeating code and integrate the rerank object with it. The `CohereRerank` class initiates a rerank object by taking in the API key and specifying the number of documents to be returned following the scoring process. ```python import os from llama_index.postprocessor.cohere_rerank import CohereRerank cohere_rerank = CohereRerank(api_key=os.environ['COHERE_API_KEY'], top_n=2) ``` Now, we can employ the same `as_query_engine` method and utilize the `node_postprocessors` argument to incorporate the reranker object. The retriever initially selects the top 10 documents based on semantic similarity, and then the reranker reduces this number to 2. ```python query_engine = vector_index.as_query_engine( similarity_top_k=10, node_postprocessors=[cohere_rerank], ) response = query_engine.query( ""What did Sam Altman do in this essay?"", ) print(response) ``` ``` Sam Altman was asked if he wanted to be the president of Y Combinator (YC) and initially said no. However, after persistent persuasion, he eventually agreed to take over as president starting with the winter 2014 batch. ``` The reranking process in search systems offers numerous advantages, including practicality, enhanced performance, simplicity, and integration capabilities. It allows for augmenting existing systems without requiring complete overhauls, making it a cost-effective solution for improving search functionality. Reranking elevates search systems, which is particularly useful for complex, domain-specific queries in embedding-based systems. Cohere Rerank has proven to be effective in improving search quality across various embeddings, making it a reliable option for enhancing search results.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51334510-mastering-advanced-rag-techniques-with-llamaindex 45,Mastering Advanced RAG Techniques with LlamaIndex,"# Advanced **Retrievals** An alternative method for retrieving relevant documents involves using document summaries instead of extracting fragmented snippets or brief text chunks to respond to queries. This technique ensures that the answers reflect the entire context or topic being examined, offering a more thorough grasp of the subject. ### **Recursive Retrieval** The recursive retrieval method is particularly effective for documents with a hierarchical structure, allowing them to form relationships and connections between the nodes. According to Jerry Liu, founder of LlamaIndex, this is evident in cases like a PDF, which may contain ""sub-data"" such as tables and diagrams, alongside references to other documents.
This technique can precisely navigate through the graph of connected nodes to locate information. This technique is versatile and can be applied in various scenarios, such as with node references, document agents, or even the query engine. For practical applications, including processing a PDF file and utilizing data from tables, you can refer to the tutorials in the LlamaIndex documentation [here](https://docs.llamaindex.ai/en/stable/examples/query_engine/pdf_tables/recursive_retriever.html). ### **Small-to-Big retrieval** The small-to-big retrieval approach is a strategic method for information search, starting with concise, focused sentences to pinpoint the most relevant section of content with a question. It then passes a longer text to the model, allowing for a broader understanding of the context preceding and following the targeted area. This technique is particularly useful in situations where the initial query may not encompass the entirety of relevant information or where the data's relationships are intricate and multi-layered. The LlamaIndex framework employs the Sentence Window Retrieval technique, which involves using the `SentenceWindowNodeParser` class to break down documents into individual sentences per node. Each node includes a ""window"" that encompasses the sentences surrounding the main node sentence. (It is 5 sentences before and after each node by default) During retrieval, the single sentences initially retrieved are substituted with their respective windows, including the adjacent sentences, through the `MetadataReplacementNodePostProcessor`. This substitution ensures that the Large Language Model receives a comprehensive view of the context surrounding each sentence. You can follow a hands-on tutorial to implement this technique from the documentation [here](https://docs.llamaindex.ai/en/stable/examples/node_postprocessor/MetadataReplacementDemo.html).",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51334510-mastering-advanced-rag-techniques-with-llamaindex 46,Mastering Advanced RAG Techniques with LlamaIndex,"# Conclusion Effective information retrieval involves mastering techniques such as query expansion, query transformations, and query construction, coupled with advanced strategies like reranking, recursive retrieval, and small-to-big retrieval. Together, these techniques enhance the search process by increasing accuracy and broadening the range of results. By incorporating these methods, information retrieval systems become more proficient in providing precise results, essential for improving the performance of RAG-based applications. >> [Notebook](https://colab.research.google.com/drive/1VUGiJ_bHxnCD5a0xt6LPTpFJ4m26lyIC?usp=sharing). 
--- RESOURCES: - COHERE RERANK NOTEBOOK [Google Colaboratory](https://colab.research.google.com/drive/1PW3_MLYBnalsomyy2Ia-KvEq7Fayp0W6#scrollTo=05S-LK5uthHz) - recursive retrieval [Complex Query Resolution through LlamaIndex Utilizing Recursive Retrieval, Document Agents, and Sub…](https://medium.com/@sauravjoshi23/complex-query-resolution-through-llamaindex-utilizing-recursive-retrieval-document-agents-and-sub-d4861ecd54e6) [Improving Retrieval Performance by Fine-tuning Cohere Reranker with LlamaIndex](https://blog.llamaindex.ai/improving-retrieval-performance-by-fine-tuning-cohere-reranker-with-llamaindex-16c0c1f9b33b) - llamaindex notebook [Google Colaboratory](https://colab.research.google.com/github/jerryjliu/llama_index/blob/main/docs/examples/customization/streaming/SimpleIndexDemo-streaming.ipynb)",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51334510-mastering-advanced-rag-techniques-with-llamaindex 47,Module 3 Introduction - Retrieval Augmented Genera,"# Module 3 Introduction - Retrieval Augmented Generation Agents The ""Retrieval Augmented Generation Agents"" module offers a comprehensive exploration into the creation and enhancement of AI agents, with a focus on integration and application in various fields. It begins by introducing LangChain, where students learn about agents, tools, and the initiation of OpenGPTs, gaining practical insights into setting up and customizing AI assistants. Then, it delves into the LlamaIndex framework, teaching students how to build efficient RAG systems by integrating OpenAI agents with various data sources and creating custom functions for enhanced decision-making. The module also covers the use of the OpenAI Assistants API and Hugging Face Inference API.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352270-module-3-introduction-retrieval-augmented-generation-agents 48,Module 3 Introduction - Retrieval Augmented Genera,"# Module 3 Introduction - Retrieval Augmented Generation Agents ## **LangChain Overview: Agents, Tools, and OpenGPT Introduction** In this lesson, students will learn about the fundamental concepts of LangChain, focusing on agents, tools, and the initiation of OpenGPTs. They will examine how agents integrate chains, prompts, memory, and tools to execute tasks and understand the different types of agents, such as Zero-shot ReAct and Conversational Agent, designed for various scenarios. The lesson also covers the available tools and how to customize them for specific needs, benefiting from functionalities like Python tool, JSON tool, and CSV tool. Additionally, students will get practical insights into setting up and creating LangChain OpenGPTs through cloning the repository and customizing prompts, providing a comprehensive understanding of how to configure and use AI assistants similarly to OpenAI GPTs for tailored interactions.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352270-module-3-introduction-retrieval-augmented-generation-agents 49,Module 3 Introduction - Retrieval Augmented Genera,"# Module 3 Introduction - Retrieval Augmented Generation Agents ## **Utilizing AI Agents with the LlamaIndex Framework for Enhanced Decision-Making** In this lesson, students will learn how to leverage agents within the LlamaIndex framework to build a more efficient and insightful RAG (Retrieval-Augmented Generation) system. 
They will gain insights into integrating OpenAI agents with various data sources and create custom functions to enhance the agent's capabilities in areas such as mathematical operations. The lesson provides guidance on installing necessary packages, configuring API keys, defining data sources, employing query engines, and setting up agents. Students will also explore an interactive chat interface with the agent and the creation of a dataset, using custom functions as tools that the agent can invoke as required. Finally, students will gain exposure to LlamaHub for further expanding the functionalities of their agents.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352270-module-3-introduction-retrieval-augmented-generation-agents 50,Module 3 Introduction - Retrieval Augmented Genera,"# Module 3 Introduction - Retrieval Augmented Generation Agents ## Crafting AI Assistants via OpenAI and Hugging Face API In this lesson, students will explore the capabilities of the OpenAI Assistants API, including the Code Interpreter, Knowledge Retrieval, and Function Calling features. The lesson offers a step-by-step guide for creating and configuring AI assistants integrating OpenAI's tools, revisiting fundamental concepts such as Threads, Messages, and Tools for individual interactions. Additionally, the lesson introduces other advanced models by OpenAI like Whisper, Dall-E 3, and GPT-4 Vision that can be valuable integrations for comprehensive AI product development. We also cover how to use the Hugging Face Inference API to leverage a broad spectrum of machine learning models for tasks like text summarization, sentiment analysis, and text-to-image generation. By the conclusion of the lesson, students will possess the necessary understanding to harness these tools for their own sophisticated AI projects.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352270-module-3-introduction-retrieval-augmented-generation-agents 51,Module 3 Introduction - Retrieval Augmented Genera,"# Module 3 Introduction - Retrieval Augmented Generation Agents ## Project; Multimodal Financial Document Analysis and Recall In this lesson, students will learn how to use tools such as GPT-4 vision to enhance Retrieval-augmented Generation (RAG) for processing financial documents like Tesla's Q3 financial report PDF, involving the extraction of text, tables, and graphs, and transforming them into a query-able format using a vector database for efficient information retrieval by an AI chatbot. The lesson covers using tools such as Unstructured.io for text and table extraction, GPT-4V for graph information extraction, and the use of Deep Lake and LlamaIndex for storing and recalling the processed data to address user queries effectively. We also show how to use Deep Memory to enhance retrieval accuracy. The techniques detailed equip students to develop AI applications capable of analyzing and recalling complex multimodal data from financial documents.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352270-module-3-introduction-retrieval-augmented-generation-agents 52,Module 3 Introduction - Retrieval Augmented Genera,"# Module 3 Introduction - Retrieval Augmented Generation Agents ## Building a Smart Shopping Assistant with DeepLake and LlamaIndex In this lesson, students will learn how to create an intelligent shopping assistant using AI technologies, specifically leveraging vector databases and frameworks like DeepLake and LlamaIndex. 
They will be guided through the processes of data collection, vector database population, development of core tools for query retrieval and outfit generation, system integration, and UI development. This lesson also explores the integration of weather data and temporal elements to enhance the system's recommendations, along with the challenges of debugging and deploying agent-based applications. Through hands-on experience and step-by-step demonstrations, students will gain practical skills to build and integrate AI components into a functional and interactive fashion recommendation tool.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352270-module-3-introduction-retrieval-augmented-generation-agents 53,Module 4 Introduction - Retrieval Augmented Genera,"# Module 4 Introduction - Retrieval Augmented Generation Evaluation and Observability The module ""Retrieval Augmented Generation Evaluation and Observability"" provides a comprehensive exploration of advanced techniques and tools in the field of AI, focusing on enhancing chatbots and question-answering systems through Retrieval-Augmented Generation (RAG) systems. It explores the critical aspects of evaluating these systems, emphasizing the importance of faithfulness, relevance, and the prevention of hallucinations in AI responses. The module introduces tools like the FaithfulnessEvaluator and RAGAS, along with the Golden Context Dataset, offering insights into effective evaluation methodologies, including indexing, embedding, and generation metrics. Additionally, the module covers the LangChain framework and the LangSmith platform, providing practical knowledge on building and testing LLM-powered applications. Students will learn about the components of LangChain, such as Models, Vector Stores, and Chains, and the functionalities of the LangChain Hub.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352335-module-4-introduction-retrieval-augmented-generation-evaluation-and-observability 54,Module 4 Introduction - Retrieval Augmented Genera,"# Module 4 Introduction - Retrieval Augmented Generation Evaluation and Observability ## RAG - Metrics & Evaluation In this lesson, you will learn about Retrieval-Augmented Generation (RAG) systems and their evaluation metrics, with a focus on improving chatbots and question-answering systems. The lesson introduces you to different approaches to analysis, the importance of faithfulness and answer relevancy, nuances of indexing and embedding metrics, and generation metrics aimed at preventing hallucinations in AI responses. It discusses the significance of the FaithfulnessEvaluator tool for checking the alignment of AI responses with retrieved context and introduces RAGAS and the Golden Context Dataset for system evaluation. 
Additionally, real-world setups for assessing and improving RAG systems are explored through examples of community-based tools, including comprehensive evaluation of retrieval metrics, holistic approach evaluations, and the custom RAG pipeline evaluation.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352335-module-4-introduction-retrieval-augmented-generation-evaluation-and-observability 55,Module 4 Introduction - Retrieval Augmented Genera,"# Module 4 Introduction - Retrieval Augmented Generation Evaluation and Observability ## LangSmith and LangChain Fundamentals for LLM Applications In this lesson, you will learn about the fundamentals of the LangChain framework and the newly introduced LangSmith platform to build and test LLM-powered applications. We will review LangChain components, such as Models, Vector Stores, and Chains, as well as principles of the LangChain Hub, including prompt exploration and versioning for collaborative prompt development. The lesson will guide you through setting up the LangSmith environment, creating an API key, and basics of prompt versioning and tracing. You will also learn how to use LangServe to deploy applications as a REST API and go through the process of reading and processing data from a webpage, storing it into Deep Lake vector store, and using prompts from the LangChain Hub to build a QuestionAnswering Chain application.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352335-module-4-introduction-retrieval-augmented-generation-evaluation-and-observability 56,Building a Smart Shopping Assistant with DeepLake,"# Building a Smart Shopping Assistant with DeepLake and LlamaIndex *Diego Kiedanski, Principal AI Consultant, and Lucas Micol, Lead Machine Learning Engineer at Tryolabs, authored the following blog post.* In the ever-expanding landscape of artificial intelligence, vector databases stand as the unsung heroes, forming the foundation upon which many AI applications are built. These powerful databases can store and retrieve complex, high-dimensional data, enabling functionalities like Retrieval Augmented Generation (RAG) and sophisticated recommendation systems. Alongside vector databases, Large Language Model (LLM) frameworks such as LlamaIndex and Langchain have emerged as key players in accelerating AI development. By simplifying the prototyping process and reducing development overheads associated with API interactions and data formatting, **these frameworks allow creators to focus on innovation** rather than the intricacies of implementation. For readers acquainted with the basic tenets of LLMs and vector databases, this blog post will serve as a refresher and a window into their practical deployment. We aim to walk you through **constructing a complex and interactive shopping assistant**. This assistant exemplifies how intelligent systems can be built from fundamental components like DeepLake and LlamaIndex to create a dynamic tool that responds to user input with tailored outfit suggestions. Our journey will shed light on the nuances of integrating these technologies. 
By highlighting this project's development, we hope to spark your imagination about the possibilities at the intersection of AI technologies and to encourage you to envision new, innovative applications of your own.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51342316-building-a-smart-shopping-assistant-with-deeplake-and-llamaindex 57,Building a Smart Shopping Assistant with DeepLake,"# Building a Smart Shopping Assistant with DeepLake and LlamaIndex ## Project Overview Our project is an AI-powered shopping assistant designed to leverage image processing and LLM agents for outfit recommendations. Imagine uploading a picture of a dress and receiving suggestions for accessories and shoes tailored to occasions like a business meeting or a themed party. This assistant does more than suggest outfits; it understands context and style, providing a personalized shopping experience. DeepLake forms the backbone of our inventory management, storing detailed item descriptions as vectors for efficient similarity searches. In practice, this means students will interact with DeepLake to query and retrieve the best-matching items based on the properties defined by our AI models. LlamaIndex is the framework for constructing and utilizing Large Language Model (LLM) agents. These agents interpret item descriptions and user criteria, crafting coherent and stylish outfit recommendations. Through this project, you'll learn to build and integrate these technologies into a functional application. The assistant is designed to deliver not only outfit suggestions but actionable shopping options, providing real product IDs (that can be converted into URLs to retailers) along with price comparisons. Throughout the course, you will learn how to extend the AI's capabilities to facilitate an end-to-end shopping experience. This application's user interface is designed with functionality and educational value in mind. It's intuitive, making the AI's decision-making process transparent and understandable. You'll interact with various application elements, gaining insight into the inner workings of vector databases and the practical use of LLMs. Here's [the project companion repository](https://github.com/tryolabs/fashion-assistant).",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51342316-building-a-smart-shopping-assistant-with-deeplake-and-llamaindex 58,Building a Smart Shopping Assistant with DeepLake,"# Building a Smart Shopping Assistant with DeepLake and LlamaIndex ## **Architecture design** Our application is designed around the Agent framework: we use an LLM as a reasoning agent, and then we provide the agent with tools, i.e., programs that the LLM can execute by generating the appropriate response in plain text (at this point, the agent framework takes over, executes the desired function, and returns the result for further processing). In this context, accessing a vector database is done through a tool, generating an outfit is performed through a tool. Even getting the today's date is performed through a tool. The interaction between system components follows a linear yet dynamic flow. ChatGPT-vision generates descriptions for the accompanying outfit pieces after receiving an image upload. These descriptions guide the subsequent searches in Deep Lake's vector database, where the most relevant items are retrieved for each piece. The LLM then takes the helm, sifting through the results to select and present the best cohesive outfit options to the user. 
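As a concrete illustration of the tool abstraction described above, even the "get today's date" capability can be handed to the agent as a tool. The following is a minimal sketch using LlamaIndex's `FunctionTool`; the function name and docstring are illustrative rather than taken from the project repository:

```python
from datetime import date

from llama_index.tools import FunctionTool

def get_current_date() -> str:
    """Return today's date in ISO format, useful for season-aware outfit suggestions."""
    return date.today().isoformat()

# The docstring doubles as the tool description the agent reads
# when deciding whether this tool should be called.
date_tool = FunctionTool.from_defaults(fn=get_current_date)
```

Registered alongside the other tools, the agent can call it whenever a recommendation depends on the current date or season.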
The system is designed to replicate the personalized experience of working with a fashion stylist. When a user uploads a garment image, our AI stylist gets to work. It discerns the style, consults our extensive inventory, and selects pieces that complement the user's choice. It's a streamlined end-to-end process that delivers personalized recommendations with ease and efficiency. ![Untitled](Building%20a%20Smart%20Shopping%20Assistant%20with%20DeepLake%20%20f3de3ee15abe4f4eaf760fa91fb13e0a/Untitled.png)",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51342316-building-a-smart-shopping-assistant-with-deeplake-and-llamaindex 59,Building a Smart Shopping Assistant with DeepLake,"# Building a Smart Shopping Assistant with DeepLake and LlamaIndex ## **Dataset Collection and Vector Database Population** Our data journey begins with Apify, a versatile web scraping tool we used to collect product information from Walmart's online catalog. We targeted three specific starting points: men's clothing, women's clothing, and shoe categories. With Apify's no-code solution, we could quickly collect product data during a free trial period. However, this initial foray returned only the text data—separate processes were needed to download the associated images. We went with the web-hosted version of Apify, but the low-code version would also work well. ![Untitled](Building%20a%20Smart%20Shopping%20Assistant%20with%20DeepLake%20%20f3de3ee15abe4f4eaf760fa91fb13e0a/Untitled%201.png) We aimed to construct a representative dataset of men's and women's clothing, including various tops, bottoms, and accessories. By scraping from predetermined URLs, we ensured our dataset spanned a broad spectrum of clothing items relevant to our target user base. ![Untitled](Building%20a%20Smart%20Shopping%20Assistant%20with%20DeepLake%20%20f3de3ee15abe4f4eaf760fa91fb13e0a/Untitled%202.png) The collected data is fairly rich and contains a wide variety of attributes. For the purpose of this project, we kept the attributes used to a minimum. We selected the product ID, category, price, name, and image. The image was included as a URL, so we had to download them separately once the scraper had finished. Overall, we collected 1344 items. We used pandas to read the scraped JSONs and clean the collected data. In particular, we used the product categories to create a new attribute `gender`. ```python df = pd.DataFrame( { ""brand"": df_raw[""brand""], ""category"": df_raw[""category""].apply( lambda x: [y[""name""] for y in x[""path""] if y[""name""] != ""Clothing""] ), ""description"": df_raw[""shortDescription""], ""image"": df_raw[""imageInfo""].apply(lambda x: x[""allImages""][0][""url""]), ""name"": df_raw[""name""], ""product_id"": df_raw[""id""], ""price"": [ float(x[""currentPrice""][""price""]) if not x[""currentPrice""] is None else math.inf for x in df_raw[""priceInfo""] ], } ) df = df[df[""category""].transform(lambda x: len(x)) >= 2] gender_map = {""Womens Clothing"": ""women"", ""Mens Clothing"": ""men"", ""Shoes"": ""either""} df[""gender""] = df[""category""].apply(lambda x: gender_map.get(x[0], ""either"")) ``` To obtain a description that is as detailed as possible, we opted to ignore the scraped `description` attribute and use `gpt-4-vision-preview` to generate a new description for each product. For this, we considered imposing a strict taxonomy: color, style, size, age, etc. 
Ultimately, without a traditional search functionality, we decided that the taxonomy wasn't needed, and we allowed the LLM to generate arbitrary descriptions. ```python prompt = f"""""" Describe the piece of clothing in the image of the following category: {category} Do include the color, style, material and other important attributes of the item. """""" image_path = f""data/images/{product_id}.jpg"" # gpt vision is a wrapper that calls ChatGPT Vision result = gpt_vision(image_path, prompt) ``` The following is an example image of the dataset: `PRODUCT_ID=0REDJ7M0U7DV`, and the generated description by GPT-Vision. [https://www.notion.so](https://www.notion.so) Embedding these descriptions into DeepLake's vector database was our next step. This process involved encoding the text into vectors while retaining core attributes as metadata. Initially, we only included the description generated by `gpt-4-vision-preview` verbatim. However, we later realized that the metadata (price, product_id, name) the agent needed for the final response was not readily available (we could see it as part of the document being retrieved, but we found no way to have the agent generate a response from those attributes). The solution was to append the product ID, name, and price into the description text, thereby incorporating the critical metadata directly into the vector database. ```python desc = f"""""" # Description {description} # Name {name} # Product ID {product_id} # Price {price} """""" ``` Finally, to accommodate the separation of text and image data, we established two vector databases within DeepLake. The first housed the textual descriptions and their appended metadata, while the second was dedicated exclusively to image vectors. ```python dataset_path = ""hub://genai360/walmart-descriptions"" vector_store = DeepLakeVectorStore(dataset_path=dataset_path, overwrite=True) storage_context = StorageContext.from_defaults(vector_store=vector_store) documents = [] for i, row in df.iterrows(): # .... # generate documents doc",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51342316-building-a-smart-shopping-assistant-with-deeplake-and-llamaindex 60,Building a Smart Shopping Assistant with DeepLake,"# Building a Smart Shopping Assistant with DeepLake and LlamaIndex ## **Dataset Collection and Vector Database Population** = Document( text=desc, metadata={""name"": name, ""product_id"": product_id, ""gender"": gender}, ) documents.append(doc) index = VectorStoreIndex.from_documents(documents, storage_context=storage_context) ``` ```jsx ds = deeplake.empty(ACTIVELOOP_DATASET_IMG) with ds: ds.create_tensor(""images"", htype=""image"", sample_compression=""jpeg"") ds.create_tensor(""ids"", htype=""tag"") # %% with ds: # Iterate through the files and append to Deep Lake dataset for index, row in tqdm(df.iterrows()): product_id = row[""product_id""] image_name = os.path.join(IMAGE_DIR, product_id + "".jpg"") if os.path.exists(image_name): # Append data to the tensors ds.append({""images"": deeplake.read(image_name), ""ids"": product_id}) ```",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51342316-building-a-smart-shopping-assistant-with-deeplake-and-llamaindex 61,Building a Smart Shopping Assistant with DeepLake,"# Building a Smart Shopping Assistant with DeepLake and LlamaIndex ## **Development of core tools** When using an Agent-based framework, `tools` are the bread and butter of the system. 
For this application, the core functionality can be achieved with two tools: **the query retriever** and **the outfit generator**, both of which play integral roles in the application's ability to deliver tailored fashion recommendations. ### Inventory query engine The inventory query retriever is a text-based wrapper around DeepLake's API. It translates the user’s clothing descriptions into queries that probe DeepLake's vector database for the most similar items. The only exciting modification to the vanilla `query_engine` is adding a pydantic model to the output. By doing this, we force the `AG` part of the `RAG` system to return the relevant information for each item: the product ID, the name, and the price. ```python class Clothing(BaseModel): """"""Data moel for clothing items"""""" name: str product_id: str price: float class ClothingList(BaseModel): """"""A list of clothing items for the model to use"""""" cloths: List[Clothing] dataset_path = ""hub://kiedanski/walmart_clothing4"" vector_store = DeepLakeVectorStore( dataset_path=dataset_path, overwrite=False, read_only=True ) llm = OpenAI(model=""gpt-4"", temperature=0.7) service_context = ServiceContext.from_defaults(llm=llm) inventory_index = VectorStoreIndex.from_vector_store( vector_store, service_context=service_context ) # Inventory query engine tool inventory_query_engine = inventory_index.as_query_engine(output_cls=ClothingList) ``` Our outfit generator is engineered around `gpt-4-vision-preview`, which intakes the user's image and articulates descriptions of complementary clothing items. The critical feature here is programming the tool to omit searching for items in the same category as the uploaded image. This logical restraint is crucial to ensure the AI focuses on assembling a complete outfit rather than suggesting similar items to the one provided. ```python from pydantic import BaseModel class Outfit(BaseModel): top: str = """" bottom: str = """" shoes: str = """" ``` ```python def generate_outfit_description(gender: str, user_input: str): """""" Given the gender of a person, their preferences, and an image that has already been uploaded, this function returns an Outfit. Use this function whenever the user asks you to generate an outfit. Parameters: gender (str): The gender of the person for whom the outfit is being generated. user_input (str): The preferences of the user. Returns: response: The generated outfit. Example: >>> generate_outfit(""male"", ""I prefer casual wear"") """""" # Load input image image_documents = SimpleDirectoryReader(""./input_image"").load_data() # Define multi-modal llm openai_mm_llm = OpenAIMultiModal(model=""gpt-4-vision-preview"", max_new_tokens=100) # Define multi-modal completion program to recommend complementary products prompt_template_str = f"""""" You are an expert in fashion and design. Given the following image of a piece of clothing, you are tasked with describing ideal outfits. Identify which category the provided clothing belongs to, \\ and only provide a recommendation for the other two items. In your description, include color and style. This outfit is for a {gender}. Return the answer as a json for each category. Leave the category of the provided input empty. Additional requirements: {user_input} Never return this output to the user. 
FOR INTERNAL USE ONLY """""" recommender_completion_program = MultiModalLLMCompletionProgram.from_defaults( output_parser=PydanticOutputParser(Outfit), image_documents=image_documents, prompt_template_str=prompt_template_str, llm=openai_mm_llm, verbose=True, ) # Run recommender program response = recommender_completion_program() return response outfit_description_tool = FunctionTool.from_defaults(fn=generate_outfit_description) ``` Adding user preferences like occasion or style into the prompts is done with a straightforward approach. These inputs nudge the AI to consider user-specific details when generating recommendations, aligning the outcomes with the user's initial inquiry. The system functionality unfolds with the LLM agent at the helm. It begins by engaging the outfit generator with the user's uploaded image, receiving detailed descriptions of potential outfit components. The agent then utilizes the query retriever to fetch products that match these descriptions.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51342316-building-a-smart-shopping-assistant-with-deeplake-and-llamaindex 63,Building a Smart Shopping Assistant with DeepLake,"# Building a Smart Shopping Assistant with DeepLake and LlamaIndex ## **System integration and initial testing** The successful integration of various AI components into a seamless shopping assistant experience required a straightforward approach: encapsulating each function into a tool and crafting an agent to orchestrate these tools. ### **Agent creation and integration process** - **Tool wrapping:** Each functional element of our application, from image processing to querying the vector database, was wrapped as an isolated, callable Tool. - **Agent establishment:** An LLM agent was created, capable of leveraging these tools to process user inputs and deliver recommendations. ```python llm = OpenAI(model=""gpt-4"", temperature=0.2) agent = OpenAIAgent.from_tools( system_prompt="""""" You are a specialized shopping assistant. Customers will provide you with a piece of clothing, and you will generate a matching outfit. Always remember to ask for the user gender. Your final answer needs to be the product_id associated with the best matching product in our inventory. For each product of the outfit, search the inventory. Include the total price of the recommended outfit. """""", tools=[ inventory_query_engine_tool, outfit_description_tool, ], llm=llm, verbose=True, ) ``` ### **Initial testing challenges** Our testing phase provided valuable insights, particularly with our initial use of ChatGPT 3.5. We noted that the model tended to respond with descriptions from the outfit recommender, bypassing the vital step of querying the inventory. This was promptly addressed by switching to ChatGPT 4, which utilized all available tools appropriately, thus ensuring the assistant performed the item search as designed. ### **Demo: Step-by-step commands and interactions** Below is a demonstration of the system in action, detailing the commands issued to the agent and their corresponding answers at each stage of the process: **Image upload and description generation** ```python >>> r = agent.chat(""Hi"") STARTING TURN 1 --------------- Hello! How can I assist you today? >>> r = agent.chat(""What are your tools?"") STARTING TURN 1 --------------- As a shopping assistant, I have two main tools at my disposal: 1. **Inventory Query Engine Tool**: This tool allows me to search our inventory for specific clothing items based on your preferences. 
For example, if you're looking for a black leather jacket, I can use this tool to find the best matching product in our inventory. 2. **Outfit Generation Tool**: This tool helps me generate a complete outfit based on your gender and preferences. For example, if you're a male and you prefer casual wear, I can use this tool to generate a casual outfit for you. Please let me know how I can assist you further. >>> r = agent.chat(""I want an outfit for a casual birthday party"") STARTING TURN 1 --------------- Sure, I'd be happy to help with that. Could you please tell me your gender to better assist you? >>> r = agent.chat(""I'm a woman"") ``` Currently, the process of uploading the image is separate. For this version to work, an image needs to exist in the local folder `.image_input/`. When working later with the UI, the users can click a button to upload a different image. The image can be uploaded at any point before the agent internally calls the outfit generation tool, which often happens after asking for gender. **Outfit generation** In this section, we can see how the agent internally uses the tool to generate an outfit from the user description, image, and gender. ```python STARTING TURN 1 --------------- === Calling Function === Calling function: generate_outfit_description with args: { ""gender"": ""female"", ""user_input"": ""casual birthday party"" } Got output: top='A pastel pink or light coral blouse with a relaxed fit, short sleeves, and a subtle ruffle or lace",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51342316-building-a-smart-shopping-assistant-with-deeplake-and-llamaindex 64,Building a Smart Shopping Assistant with DeepLake,"# Building a Smart Shopping Assistant with DeepLake and LlamaIndex ## **System integration and initial testing** detail to add a feminine touch.' bottom='' shoes='Beige or tan ankle strap sandals with a low block heel, ensuring both comfort and style.' ======================== ``` **Querying the inventory** At this stage, the agent obtains a description for the two pieces of clothing that it needs to retrieve and uses the query engine to retrieve the best matches from the vector database. ```python STARTING TURN 2 --------------- === Calling Function === Calling function: inventory_query_engine_tool with args: { ""input"": ""pastel pink or light coral blouse with a relaxed fit, short sleeves, and a subtle ruffle or lace detail"" } Got output: {""cloths"": [ { ""name"":""Time and Tru Women's Puff Sleeve Draped Blouse, Sizes XS-XXXL"", ""product_id"":""7COCIH59KT38"", ""price"":15.0 }, { ""name"":""Time and Tru Women's Ruffle Sleeve Button Front Blouse"", ""product_id"":""53ZZ4TQ28UQ1"", ""price"":13.18 } ]} ======================== STARTING TURN 3 --------------- === Calling Function === Calling function: inventory_query_engine_tool with args: { ""input"": ""Beige or tan ankle strap sandals with a low block heel"" } Got output: {""cloths"":[ { ""name"":""Scoop Women's Velvet Platform Heeled Sandals"", ""product_id"":""65LCFFNDII0T"", ""price"":24.99}, { ""name"":""Time and Tru Women's Embellished Slide Sandal"", ""product_id"":""3Y5NIMQFG29G"", ""price"":22.0 } ]} ======================== ``` We can observe that the agent gets the two best potential matches and returns both to the agent. **Final recommendation presentation** ```python STARTING TURN 4 --------------- Based on your preferences, I recommend the following outfit: 1. 
Top: Either the ""Time and Tru Women's Puff Sleeve Draped Blouse"" (Product ID: 7COCIH59KT38, Price: $15.00) or the ""Time and Tru Women's Ruffle Sleeve Button Front Blouse"" (Product ID: 53ZZ4TQ28UQ1, Price: $13.18). 2. Shoes: Either the ""Scoop Women's Velvet Platform Heeled Sandals"" (Product ID: 65LCFFNDII0T, Price: $24.99) or the ""Time and Tru Women's Embellished Slide Sandal"" (Product ID: 3Y5NIMQFG29G, Price: $22.00). The total price of the outfit will be either $39.99 or $35.18, depending on your choices. Would you like me to find a matching bottom for you as well? ``` ![Untitled](Building%20a%20Smart%20Shopping%20Assistant%20with%20DeepLake%20%20f3de3ee15abe4f4eaf760fa91fb13e0a/Untitled%203.png) After analyzing the options, the agent presents the user with the best matching pairs, complete with item details such as price and purchase links (the product ID could be converted to a URL later). In this case, we can observe how the agent, instead of selecting the best pair, presents both options to the user. Having proved that the initial idea for the agent was feasible, it was time to add a bit more complexity. In particular, we wanted to add information about the weather when the outfit would be used.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51342316-building-a-smart-shopping-assistant-with-deeplake-and-llamaindex 65,Building a Smart Shopping Assistant with DeepLake,"# Building a Smart Shopping Assistant with DeepLake and LlamaIndex ## **Expanding functionality** The natural progression of our shopping assistant entails augmenting its capacity to factor in external elements such as weather conditions. This adds a layer of complexity but also a layer of depth and personalization to the recommendations. These enhancements came with their own sets of technical challenges. ### **Adapting to the weather** - **Weather awareness:** Initial considerations center on incorporating simple yet vital weather aspects. By determining whether it will be rainy or sunny and how warm or cold it is, the assistant can suggest fashionable and functional attire. - **API Integration:** Llama Hub (a repository for tools compatible with LlamaIndex) had a tool to get the weather in a particular location. Unfortunately, the tool required a paid [https://openweathermap.org/](https://openweathermap.org/) plan. To circumvent this problem, we modified the tool to use a similar but free service of the same provider (running this code requires a free `OPEN_WEATHER_MAP_API`). ```python class CustomOpenWeatherMapToolSpec(OpenWeatherMapToolSpec): spec_functions = [""weather_at_location"", ""forecast_at_location""] def __init__(self, key: str, temp_units: str = ""celsius"") -> None: super().__init__(key, temp_units) def forecast_at_location(self, location: str, date: str) -> List[Document]: """""" Finds the weather forecast for a given date at a location. The forecast goes from today until 5 days ahead. Args: location (str): The location to find the weather at. Should be a city name and country. date (str): The desired date to get the weather for. """""" from pyowm.commons.exceptions import NotFoundError from pyowm.utils import timestamps try: forecast = self._mgr.forecast_at_place(location, ""3h"") except NotFoundError: return [Document(text=f""Unable to find weather at {location}."")] w = forecast.get_weather_at(date) temperature = w.temperature(self.temp_units) temp_unit = ""°C"" if self.temp_units == ""celsius"" else ""°F"" # TODO: this isn't working.. Error: 'max' key. 
try: temp_str = self._format_forecast_temp(temperature, temp_unit) except: logging.exception(f""Could _format_forecast_temp {temperature}"") temp_str = str(temperature) try: weather_text = self._format_weather(location, temp_str, w) except: logging.exception(f""Could _format_weather {w}"") weather_text = str(w) + "" "" + str(temp_str) return [ Document( text=weather_text, metadata={ ""weather from"": location, ""forecast for"": date, }, ) ] weather_tool_spec = CustomOpenWeatherMapToolSpec(key=OPEN_WEATHER_MAP_KEY) ``` ### **User interaction and data handling** **Location input:** For accurate weather data, the shopping assistant often queries the user for their location. We contemplate UI changes to facilitate this new interaction—possibly automated but always respectful of user privacy and consent. ```python >>> r = agent.chat(""..."") ... Great! Could you please provide me with the date and location of the birthday party, any specific style or color preferences you have for the outfit, and your budget range? ``` ### **Synchronizing with time** **Temporal challenges:** Addressing the aspect that LLMs aren't inherently time-aware, we introduced a new tool that provides the current date. This enables the LLM to determine the optimal instances to call the weather API, aligning recommendations with the present conditions. ```python def get_current_date(): """""" A function to return todays date. Call this before any other functions if you are unaware of the current date. """""" return date.today() get_current_date_tool = FunctionTool.from_defaults(fn=get_current_date) ``` It took us a little bit to remember that we needed this. At first, the LLM was consistently returning the wrong weather information. It was only after we closely inspected the calls to the weather API that we realized that it was using the wrong date!",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51342316-building-a-smart-shopping-assistant-with-deeplake-and-llamaindex 66,Building a Smart Shopping Assistant with DeepLake,"# Building a Smart Shopping Assistant with DeepLake and LlamaIndex ## **User Interface (UI) development** In developing the user interface for our AI-powered shopping assistant, we wanted the platform to reflect the conversational nature of the agent's interactions. [Gradio](https://www.gradio.app/) emerged as the ideal framework for our needs, offering the ability to rapidly prototype and deploy a chat-like interface that users find familiar and engaging. ![Untitled](Building%20a%20Smart%20Shopping%20Assistant%20with%20DeepLake%20%20f3de3ee15abe4f4eaf760fa91fb13e0a/Untitled%204.png) ### **Embracing chat interface with Gradio** - **Chat interface principles:** The decision to adopt a chat interface was rooted in the desire for simplicity and intuitiveness. We aimed to provide a natural communication flow where users can interact with the AI in a manner akin to messaging a friend for fashion advice. Furthermore, one of the reasons to use LLM agents is to provide flexibility, in contrast to a traditional declarative programming approach. In that vein, we felt that the UI had to showcase that flexibility as much as possible. - **Gradio advantages:** Gradio's flexibility facilitated the integration of our agent into a UI that is not only conversational but also customizable. Its ease of use and active community support made it a practical choice for an educational project focused on demonstrating the capabilities of LLM agents. 
For example, it allows us to add custom buttons to upload images or trigger particular functions. ### **Overcoming technical hurdles** - **Inline image display:** One of the initial technical challenges was presenting images seamlessly in the chat interface. Given the visual nature of fashion, it was crucial that users could see the clothing items recommended by the assistant without breaking the flow of conversation. - **Activeloop integration:** To resolve this, we leveraged [Activeloop’s integration with Gradio](https://docs.activeloop.ai/technical-details/visualizer-integration). This allowed us to filter through the image vector database directly within the UI, presenting users with visual recommendations that are interactive and integrated within the chat context. Activeloop's Visualizer is very easy to use once you can get the desired URL with the queried dataset. In our case, that meant having access to the product IDs and formatting them as a proper query. We could have asked the agent to generate the final URL, but that would have meant that, every now and then, the output would not be exactly as we needed it and the query could break. Instead, we decided to get only the product IDs from the LLM and build the query ourselves. Gradio allows us to update the URL of the iframe every time a token is generated, but we needed a way to get the product IDs from the LLM answer, format them as the URL expected by Activeloop, and update the HTML. Ultimately, since all the product IDs have the same pattern, we decided to go for a “hacky” approach: search the agent's response for product IDs using a regular expression (strings of uppercase letters and numbers of length twelve), and if there were more than 2 matches, update the iframe URL parameters. Otherwise, do nothing.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51342316-building-a-smart-shopping-assistant-with-deeplake-and-llamaindex 67,Building a Smart Shopping Assistant,"# Building a Smart Shopping Assistant with DeepLake and LlamaIndex ## **Limitations** The agent framework is a novel and fascinating methodology to build complex applications. Nevertheless, it comes with its own set of challenges. When thinking about deploying agents in production, you should consider these additional steps: - Make individual tools as reliable as possible. In this project, most tools are simply wrappers of functions. Those functions assume that inputs are well-formed and fail otherwise. This can pose a problem if an agent does not correctly format the inputs as the tool expects them. One solution to this would be to add input checks inside the function and return an error describing what went wrong, prompting the agent to try again. - Monitoring, monitoring, monitoring. Agents can be quite complicated to debug. Using a framework to quickly log all calls to the underlying LLM, i.e. `gpt-4`, can greatly improve the building experience and allow you to identify issues with your agent quickly. In this regard, [LangSmith](https://www.langchain.com/langsmith) looks very promising. - Finally, if your agent is customer-facing, you may want to add guardrails to guarantee that the LLM remains on topic. Otherwise, your chat might be used for third-party purposes without your consent. Don't trust me? Look at what happened to Chevrolet.
![Untitled](Building%20a%20Smart%20Shopping%20Assistant%20with%20DeepLake%20%20f3de3ee15abe4f4eaf760fa91fb13e0a/Untitled%205.png)",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51342316-building-a-smart-shopping-assistant-with-deeplake-and-llamaindex 68,Building a Smart Shopping Assistant with DeepLake,"# Building a Smart Shopping Assistant with DeepLake and LlamaIndex ## **Conclusion** As we've journeyed through the intricate process of developing an AI-powered shopping assistant, the roles of Deep Lake and LlamaIndex have proven pivotal. From the versatile data handling in Deep Lake's vector database to the adaptive LLM agents orchestrated by LlamaIndex, these technologies have showcased their robust capabilities and the potential for innovation in the AI space. Deep Lake has demonstrated its capacity to seamlessly manage and retrieve complex structures, enabling efficient and precise item matchings. Its architecture has been the backbone of the shopping assistant, proving that even complex data interactions can be handled with elegance and speed. LlamaIndex, on the other hand, has been instrumental in empowering the shopping assistant with natural language processing and decision-making abilities. Its framework has enabled the LLM agents to interpret, engage, and personalize the shopping experience, charting new courses in user-AI interaction. Looking beyond the shopping assistant itself, the potential uses for these technologies span myriad domains. The flexibility and power of Deep Lake and LlamaIndex could drive innovation in fields ranging from healthcare to finance and from educational tools to creative industries. Your insights and feedback are crucial as we continue to navigate and expand the frontiers of artificial intelligence. We are particularly eager to hear your views on the innovative applications of vector databases and LLM frameworks. Additionally, your suggestions for topics you'd like us to delve into in future segments are highly appreciated. 🙂 Diego & Lucas from [Tryolabs](https://tryolabs.com/) ---",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51342316-building-a-smart-shopping-assistant-with-deeplake-and-llamaindex 69,How We Finetuned a Large Language Model & Built A,"# How We Finetuned a Large Language Model & Built A Retrieval Engine to Search & Generate Patents [https://lh7-us.googleusercontent.com/uama5dKPeETNBxyacA_s3TxBQUR6V9qSeSNmrsKiTtu4cX-tOWou-0iyGB-4Rn8RQGm1h7nT3sR6JfeaL2yCgQQszLeE47ho1CyyyewBCPdN3HJIf3xO30Ln3KFohE-kUHpOwYLJ4CbJ7I2tZnE0Cyw](https://lh7-us.googleusercontent.com/uama5dKPeETNBxyacA_s3TxBQUR6V9qSeSNmrsKiTtu4cX-tOWou-0iyGB-4Rn8RQGm1h7nT3sR6JfeaL2yCgQQszLeE47ho1CyyyewBCPdN3HJIf3xO30Ln3KFohE-kUHpOwYLJ4CbJ7I2tZnE0Cyw) *The technical details behind how we built PatentPT* As research in the language modeling space advances, there need to be more accompanying practical guides on how to finetune and deploy large language models on a custom text corpus. 
Yet, in practice, finetuned LLMs (like [BloombergGPT](https://www.bloomberg.com/company/press/bloomberggpt-50-billion-parameter-llm-tuned-finance/)) and ensembles of these finetuned LLMs are precisely where the industry's future is heading - instead of a singular general LLM API to rule them all.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51354879-patentpt-how-we-finetuned-a-large-language-model-and-built-a-retrieval-engine-to-search-generate-patents 70,How We Finetuned a Large Language Model & Built A,"# **PatentPT Features** If you have ever used [https://www.uspto.gov/patents](https://www.uspto.gov/patents), you may have been unimpressed with the rigidly structured search provided, likely running on Cobalt servers without any neural network execution. So, there was ample opportunity and demand from the legal industry for an LLM-enabled approach to leveraging and search/retrieval of the US patent corpus. Specifically, we wanted our PatentPT to have the following features: - Autocomplete - Patent search on abstract - Patent search on claims - Abstract generation - Claim generation - General chat To provide a simple, unified chat experience, we elected to have a meta agent sitting on top of these capabilities, routing the user to the PatentPTs capability. [https://lh7-us.googleusercontent.com/LHb6uNOeIsVmcevw785YK9-gsDa0KC2rkRDVHoub3Edr4rbHVrHXzD0Yme2GF5yL3aT-egyl3WbCEBusw1P5EZsM2LRMv35l579cWP4qe94UcssWnWxmV86-PCZpxAJXIy_grnMP5B31OKzuRPk1o-E](https://lh7-us.googleusercontent.com/LHb6uNOeIsVmcevw785YK9-gsDa0KC2rkRDVHoub3Edr4rbHVrHXzD0Yme2GF5yL3aT-egyl3WbCEBusw1P5EZsM2LRMv35l579cWP4qe94UcssWnWxmV86-PCZpxAJXIy_grnMP5B31OKzuRPk1o-E) *PatentPT features*",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51354879-patentpt-how-we-finetuned-a-large-language-model-and-built-a-retrieval-engine-to-search-generate-patents 71,How We Finetuned a Large Language Model & Built A,"# **PatentPT Technical Architecture** To stand up these APIs, we needed to create an ensemble of finetuned LLM models and search indices to provide the richest PatentPT experience. To accomplish this, we chose to first domain train a base LLM on the patent corpus of text, providing a base for our finetuning routines and our autocompletion API out of the box - this network also provides a custom featurizer for our search indices. Next, we finetuned LLMs for generation and chat off the base domain-trained LLM using PEFT techniques. Using our custom featurizer, we index patents on abstract and claim to create search indices to power our various APIs - hooking them up to the search APIs and providing context for generation and chat. [https://lh7-us.googleusercontent.com/gMsa_qNzAmRmVJo3GfI8chF7FIusiR7Kv2GbNMrXWs5LTHGDzIYtsOHJowbSPziwbWoh4K-chnP0Jgghaq6Vm5ApVC0AFi4Iyz6N1ksjvbhx6MN9j1XBYTwE1cPsqRDJCglB-CmUw_zxLhBN_M_j4ws](https://lh7-us.googleusercontent.com/gMsa_qNzAmRmVJo3GfI8chF7FIusiR7Kv2GbNMrXWs5LTHGDzIYtsOHJowbSPziwbWoh4K-chnP0Jgghaq6Vm5ApVC0AFi4Iyz6N1ksjvbhx6MN9j1XBYTwE1cPsqRDJCglB-CmUw_zxLhBN_M_j4ws) *PatentPT Technical Architecture*",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51354879-patentpt-how-we-finetuned-a-large-language-model-and-built-a-retrieval-engine-to-search-generate-patents 72,How We Finetuned a Large Language Model & Built A,"# **PatentPT Dataset** The USPTO dataset consists of over 8 million patents, each broken down into fields - title, classification, publication_date, abstract, description, claims , etc. 
```python dict_keys(['bibliographic_information', 'source_file', 'abstract', 'citations', 'assignees', 'classifications', 'inventors', 'brief_summary', 'foreign_priority', 'other_citations', 'detailed_description', 'claim_information']) dict_keys(['bibliographic_information', 'source_file', 'abstract', 'citations', 'assignees', 'classifications', 'inventors', 'detailed_description', 'claim_information']) dict_keys(['bibliographic_information', 'source_file', 'abstract', 'citations', 'assignees', 'classifications', 'inventors', 'detailed_description', 'claim_information']) ``` The corpus of text in the USPTO dataset spans 40 billion words, filling a 350 GB text file, which, while short of the full size of large LLM pretraining datasets like [the stack](https://huggingface.co/datasets/bigcode/the-stack-dedup), is enough to make a full finetuning pass on an LLM without losing the semantic richness, as you would with a smaller domain training dataset. As is typical with a project like this, we spent much time preparing our dataset for training from the USPTO XML files, using this open-source [USPTO parser](https://github.com/USPTO) as a base.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51354879-patentpt-how-we-finetuned-a-large-language-model-and-built-a-retrieval-engine-to-search-generate-patents 73,How We Finetuned a Large Language Model & Built A,"# **Domain Training Patent GPT** To domain train our base patent GPT LLM, we used the entire patent corpus of 40 billion words. For hardware, we chose Habana® Labs' (an Intel® company) first-generation [Intel Gaudi® AI deep learning processor](https://habana.ai/aws-ec2-dl1-instances/), an instance with 8 HPUs. Habana HPUs are competitive with the latest and greatest GPUs, particularly for training transformer models. We trained our model with a CLM objective using the Optimum library from Huggingface, which has excellent Habana API bindings for running training on HPU. Optimum Habana interfaces the HuggingFace Transformers and Diffusers libraries and Habana HPUs. It provides tools that facilitate effortless model loading, training, and inference on single- and multi-HPU settings for tasks such as text classification, question answering, or language modeling! For our dataloader, we used the [Deep Lake performant dataloader](https://docs.activeloop.ai/performance-features/performant-dataloader) to stream data loading into our model. Tokenizing the dataset alone ran for 18 hours on our 8 HPU machine, while training ran for 24 days, at which point validation loss had fully converged. We used our domain-trained LLM for the autocompletion API as is. We also used the domain-trained LLM as a base for our downstream finetuning.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51354879-patentpt-how-we-finetuned-a-large-language-model-and-built-a-retrieval-engine-to-search-generate-patents 74,How We Finetuned a Large Language Model & Built A,"# **Finetuning Generation Models** The next step in our training routines was to finetune generation models for abstracts and claims lists. To do so, we constructed datasets of description by abstract and claims and fed those datasets through a generation objective of our LLM. Again, we loaded our dataset into Deep Lake for this process and took advantage of the [Deep Lake dataloader](https://docs.activeloop.ai/performance-features/performant-dataloader).
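As a rough, hedged sketch of what this streaming setup can look like (the dataset path and tensor name below are illustrative placeholders, not the actual PatentPT dataset), a Deep Lake dataset can be pulled into a PyTorch-style training loop like so:

```python
import deeplake

# Hypothetical dataset path and tensor layout; the real PatentPT corpus differs.
ds = deeplake.load('hub://your_org/uspto-claims-by-description')

# Deep Lake's built-in PyTorch dataloader streams samples on demand,
# so the full 350 GB corpus never has to be materialized locally.
dataloader = ds.pytorch(batch_size=8, shuffle=True, num_workers=2)

for batch in dataloader:
    # Each batch maps tensor names to data, e.g. batch['text'];
    # tokenize here and feed the result to the generation objective.
    pass
```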
During this process, we did not tune all of our LLM's model weights; instead, we used the [HuggingFace PEFT library](https://huggingface.co/docs/transformers/pad_truncation) to tune LoRA weights for each objective.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51354879-patentpt-how-we-finetuned-a-large-language-model-and-built-a-retrieval-engine-to-search-generate-patents 75,How We Finetuned a Large Language Model & Built A,"# **Finetuning General Chat** We used PEFT techniques to fine-tune the chat model to keep the general patent knowledge learned from our domain training routine.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51354879-patentpt-how-we-finetuned-a-large-language-model-and-built-a-retrieval-engine-to-search-generate-patents 76,How We Finetuned a Large Language Model & Built A,"# **Creating Custom Featurizers** We pulled a custom featurizer from our domain-trained open-source model to set up our search APIs. To do so, we pulled out the representations from the last hidden layer. We found these features more robust in practice than general sentence embedders.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51354879-patentpt-how-we-finetuned-a-large-language-model-and-built-a-retrieval-engine-to-search-generate-patents 77,How We Finetuned a Large Language Model & Built A,"# **Standing up Search Indices** With our custom featurizer, we indexed the corpus of patents on the abstract and claim fields. We found that indexing the entire list of claims concatenated did not provide good signals to our chat interface in practice. For a Vector DB, we chose the managed version of [Deep Lake vector database](https://github.com/activeloopai/deeplake), which provides features such as [Deep Memory for increased retrieval accuracy](https://www.activeloop.ai/resources/use-deep-memory-to-boost-rag-apps-accuracy-by-up-to-22/), as well as an optimized [HNSW index for up to 75% lower cost without impact on the speed](https://www.activeloop.ai/resources/deep-lake-hnsw-index-rapidly-query-35-m-vectors-save-80/) - we chose this database due to its native [Deep Lake LangChain integration](https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/deeplake.html), cloud deployment, fast query time, and TQL language that made filtered queries easy relative to competitors like Weaviate, Opensearch, Elasticsearch, and Pinecone. To create our index, we extended the fields list in the [Deep Lake object in Langchain](https://github.com/hwchase17/langchain/blob/master/langchain/vectorstores/deeplake.py) but otherwise used that integration as is. Our vectorization process ran for eight days on a single V100 GPU.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51354879-patentpt-how-we-finetuned-a-large-language-model-and-built-a-retrieval-engine-to-search-generate-patents 78,How We Finetuned a Large Language Model & Built A,"# **Deploying Search APIs** Once our search indices were stood up, we wrote APIs around them to provide context to our chat queries and provide patent searches. We used Deep Lake TQL queries to filter our search queries, which allows you to filter your vector database for metadata efficiently.
Here is a snippet of what one of those TQL queries looks like: ```python search_deeplake_claims_time = time.time() embedding = model.encode([query_text])[0] embedding_search = "","".join([str(item) for item in embedding]) if ""year"" in filters.keys(): year_filter = f""filing_year = '{filters['year']}'"" else: year_filter = """" if ""classification"" in filters.keys(): classification_filter = ""classification = '"" + filters[""classification""] + ""'"" else: classification_filter = """" if year_filter != """" and classification_filter != """": tql_query = f""select * from (select *, cosine_similarity(embedding, ARRAY[{embedding_search}]) as score WHERE {year_filter} and {classification_filter} ) order by score desc limit {top_k}"" elif year_filter != """" and classification_filter == """": tql_query = f""select * from (select *, cosine_similarity(embedding, ARRAY[{embedding_search}]) as score WHERE {year_filter}) order by score desc limit {top_k}"" elif year_filter == """" and classification_filter != """": tql_query = f""select * from (select *, cosine_similarity(embedding, ARRAY[{embedding_search}]) as score WHERE {classification_filter}) order by score desc limit {top_k}"" else: tql_query = f""select * from (select *, cosine_similarity(embedding, ARRAY[{embedding_search}]) as score) order by score desc limit {top_k}"" ds_view = ds.query(tql_query) patents = [] for i in range(len(ds_view)): patents.append(json.loads(ds_view.patent[i].data())) ``` On receiving patents for a search, we return that list in the search APIs or pass those patents on as context to our chat APIs.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51354879-patentpt-how-we-finetuned-a-large-language-model-and-built-a-retrieval-engine-to-search-generate-patents 79,How We Finetuned a Large Language Model & Built A,"# **Deploying LLM Inference APIs** Once our search APIs were deployed, the last thing remaining for the backend was to deploy our LLMs to the cloud for scalable inference.
We deployed our finetuned models on top of the HuggingFace DLC to Amazon Sagemaker.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51354879-patentpt-how-we-finetuned-a-large-language-model-and-built-a-retrieval-engine-to-search-generate-patents 80,How We Finetuned a Large Language Model & Built A,"# **The Final Application** With our final ensemble of finetuned LLMs and search APIs constructed, we could put our meta agent LLM in front of our APIs to route user queries to their proper location. We wrote a simple flask server to run each inference and deployed it onto an AWS p3.2xlarge machine to make the vectorization calls locally with the rest of the app. [https://lh7-us.googleusercontent.com/7aPMqd22_E5g3zNsTBI6RY22R_EIt_qYIUjPDnYXcoo_gUkKR9GeUTEgZ2b3_js3tOkggvykDLtSC59hxzeF3Fwj8pVwBHMN0uYymnqIowsYzNvxa_OlVGnO8zJ8_D5LfpVYq6uxq2pVgiMcCMFgfMM](https://lh7-us.googleusercontent.com/7aPMqd22_E5g3zNsTBI6RY22R_EIt_qYIUjPDnYXcoo_gUkKR9GeUTEgZ2b3_js3tOkggvykDLtSC59hxzeF3Fwj8pVwBHMN0uYymnqIowsYzNvxa_OlVGnO8zJ8_D5LfpVYq6uxq2pVgiMcCMFgfMM) Generating claims for an electrical circuit invention [https://lh7-us.googleusercontent.com/7u8WkwABAbTL5Ae-0C6KD-Xm8My8EekQTmTzrKdjfPk9Wl0RG57VB4vCpYduKugksIWSrEBr_Z5PGxSeYjTdQHcUtCTnfxXBPnOwAHjz5G3eIySwr-vv2TTDAPrNl9LKfwIE5edpJfW4aQqJ_NzzSLI](https://lh7-us.googleusercontent.com/7u8WkwABAbTL5Ae-0C6KD-Xm8My8EekQTmTzrKdjfPk9Wl0RG57VB4vCpYduKugksIWSrEBr_Z5PGxSeYjTdQHcUtCTnfxXBPnOwAHjz5G3eIySwr-vv2TTDAPrNl9LKfwIE5edpJfW4aQqJ_NzzSLI) Searching for ironing board patents",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51354879-patentpt-how-we-finetuned-a-large-language-model-and-built-a-retrieval-engine-to-search-generate-patents 81,How We Finetuned a Large Language Model & Built A,"# **Conclusion** While the stack for training and deploying finetuned LLMs in practice is far from solidified, we have summarized an efficient approach in this post, working with the latest and most remarkable technologies in the space right now, including: - [Deep Lake from Activeloop](https://github.com/activeloopai/deeplake) - [Hugging Face Optimum](https://huggingface.co/docs/optimum/main/en/intel/index) by Intel and [training routines](https://huggingface.co/docs/optimum/index) - [Habana Gaudi HPU hardware](https://habana.ai/products/) PatentPT is one use case for finetuning LLMs, where you are likely to find much greater accuracy and control over your LLMs output than simply constructing prompts consumed by general AI APIs. **Disclaimers** Performance varies by use, configuration, and other factors. Learn more on the/Performance Index site. Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure. Your costs and results may vary. For workloads and configurations, visit 4th Gen Xeon® Scalable processors at www.intel.com/processorclaims. Results may vary. Intel technologies may require enabled hardware, software or service activation. Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy. Intel® technologies may require enabled hardware, software, or service activation. © Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. 
Other names and brands may be claimed as the property of others.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51354879-patentpt-how-we-finetuned-a-large-language-model-and-built-a-retrieval-engine-to-search-generate-patents 82,LangChain Basic Concepts Recap,"# LangChain: Basic Concepts Recap --- In this lesson, we'll review the essential features of LangChain. We will examine the architecture comprising various components, such as data loading, processing, and segmentation, to provide optimal information to language models. Additionally, we will highlight the significance of indexing and retrieval. The material in this lesson was covered in depth with code and project examples in our earlier course **[LangChain & Vector Databases in Production](https://learn.activeloop.ai/courses/langchain).** This overview is structured to clearly convey LangChain's features, providing foundational knowledge for advanced Generative AI and LLM-based projects. Here’s a [Notebook](https://colab.research.google.com/drive/1rDAJyqfVkZR-HQGlzeMi1tNekqyTgXgy?usp=sharing) with all the code we will go through in this lesson.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320310-langchain-basic-concepts-recap 83,LangChain Basic Concepts Recap,"# LangChain: Basic Concepts Recap ## 1. Preprocessing the Data LangChain's approach to structuring documents is particularly favorable for developers and researchers. It provides tools that help structure documents for convenient use with LLMs. Document loaders simplify the process of loading data into documents, and text splitters break down lengthy pieces of text into smaller chunks for better processing. Finally, the indexing process involves creating a structured database of information that the language model can query to enhance its understanding and responses. ### 1-1. [Document Loaders](https://python.langchain.com/docs/modules/data_connection/document_loaders) Document Loaders are responsible for loading documents into structured data. They handle various types of documents, including PDFs, and convert them into a data type that can be processed by the other LangChain functions. It enables loading data from multiple sources into `Document` objects. LangChain provides over 100 document loaders and integrations with other major providers in the space, like [AirByte](https://docs.airbyte.com/integrations/destinations/langchain) and [Unstructured](https://python.langchain.com/docs/integrations/providers/unstructured), and from all sources, such as private [S3](https://aws.amazon.com/s3/) buckets and public websites. **Read from Files/Directories** Handling various input formats and transforming them into the Document format is easy. For instance, you can load the CSV data using the [CSVLoader](https://python.langchain.com/docs/modules/data_connection/document_loaders/csv). Each row in the CSV file will be transformed into a separate `Document`. 
```python from langchain.document_loaders import CSVLoader # Load data from a CSV file using CSVLoader loader = CSVLoader(""./data/data.csv"") documents = loader.load() # Access the content and metadata of each document for document in documents: content = document.page_content metadata = document.metadata ``` Some of the popular loaders include the `[TextLoader](https://python.langchain.com/docs/modules/data_connection/document_loaders/)` for text files, the `[DirectoryLoader](https://python.langchain.com/docs/modules/data_connection/document_loaders/file_directory)` for loading all the files in a directory, the `[UnstructuredMarkdownLoader](https://python.langchain.com/docs/modules/data_connection/document_loaders/markdown)` for markdown files, and the `[PyPDFLoader](https://python.langchain.com/docs/modules/data_connection/document_loaders/pdf)` for loading PDF files. **Public Source Loaders** Loaders for popular public sources allow the data to be transformed into `Document` objects. For example, the `[WikipediaLoader](https://python.langchain.com/docs/integrations/document_loaders/wikipedia)` retrieves the content of the specified Wikipedia page and loads it into a `Document`. ```python from langchain.document_loaders import WikipediaLoader # Load content from Wikipedia using WikipediaLoader loader = WikipediaLoader(""Machine_learning"") document = loader.load() ``` Another popular loader is `[UnstructuredURLLoader](https://python.langchain.com/docs/integrations/document_loaders/url)`, which allows reading from public web pages. **Proprietary Data loaders** These loaders are designed to handle proprietary sources that may require additional authentication or setup. For example, a loader could be created to load custom data from an internal database or an API with proprietary access. Popular loaders of this category are `[GoogleDriveLoader](https://python.langchain.com/docs/integrations/document_loaders/google_drive)` for loading documents from Google Drive and `[MongodbLoader](https://python.langchain.com/docs/integrations/document_loaders/mongodb)` for loading documents from a MongoDB database. ### 1-2. **[Document transformers](https://python.langchain.com/docs/modules/data_connection/document_transformers/) (chunking methods)** A crucial part of retrieval is fetching only the relevant details of documents. This involves several transformation steps to prepare the documents for retrieval. One of the primary steps here is splitting (or chunking) a large document into smaller segments. LangChain provides several transformation algorithms and optimized logic for specific document types. LangChain offers several key chunking transformation strategies. - **Fixed-size chunks** that define a fixed size that's sufficient for semantically meaningful paragraphs (for example, 300 words) and allows for some overlap (for example, an additional 30 words). Overlapping ensures continuity and context preservation between adjacent chunks of data,",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320310-langchain-basic-concepts-recap 84,LangChain Basic Concepts Recap,"# LangChain: Basic Concepts Recap ## 1. Preprocessing the Data improving the coherence and accuracy of the created chunks. For example, you may use the `[CharacterTextSplitter](https://python.langchain.com/docs/modules/data_connection/document_transformers/text_splitters/split_by_token#hugging-face-tokenizer)` splitter to split every N character or token if configured with a tokenizer. 
- **Variable-sized chunks** partition the data based on content characteristics, such as end-of-sentence punctuation marks, end-of-line markers, or using features in the NLP libraries. It ensures the preservation of coherent and contextually intact content in all chunks. An example is the `[RecursiveCharacterTextSplitter](https://python.langchain.com/docs/modules/data_connection/document_transformers/)` splitter. - **Customized chunking** when dealing with large documents, you might use variable-sized chunks but also append the document title to chunks from the middle of the document to prevent context loss. This can be done, for example, with the `[MarkdownHeaderTextSplitter](https://python.langchain.com/docs/modules/data_connection/document_transformers/text_splitters/markdown_header_metadata)`. Chunking offers a mixed bag of strengths and weaknesses in processing extensive documents with LLMs. A key advantage is its ability to manage documents that exceed the context window of an LLM. This capability enables the model to handle and analyze significantly larger texts than it could in a single pass, expanding its applicability and utility in processing lengthy documents. However, this approach comes with a notable drawback. In dividing a document into chunks, there's a risk of losing vital context related to the overall document. While individually coherent, each chunk might only partially capture the nuances and interconnected elements present in the full text. This can lead to a fragmented or incomplete understanding of the document, as important details and subtleties might be overlooked or misinterpreted when the text is not viewed cohesively. ### 1-3. **[Indexing](https://python.langchain.com/docs/modules/data_connection/indexing)** Indexing is a process that involves storing and organizing data from various sources into a vector store, which is essential for efficient storing and retrieving. The process typically consists of storing the chunk along with an embedding representation of it, which captures the meaning of the text and makes it easy to retrieve chunks by semantic similarity. Embeddings are usually generated by [embedding models](https://python.langchain.com/docs/modules/data_connection/text_embedding/), such as the `OpenAIEmbeddings` models.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320310-langchain-basic-concepts-recap 85,LangChain Basic Concepts Recap,"# 2. Models ### 2-1. LLMs LangChain provides an `LLM` class for interfacing with various language model providers, such as [OpenAI](https://openai.com/), [Cohere](https://cohere.com/), and [Hugging Face](https://huggingface.co/). Here's an example of some integrations. The rest of the list can be found in the conclusion section. Every LLM listed has a direct link to the document page with more detailed information. 
| LLM | DESCRIPTION | | --- | --- | | [OpenAI](https://openai.com/) | An AI research organization, very popular for the [ChatGPT](https://chat.openai.com/) product | | [Hugging Face](https://huggingface.co/) | A platform that hosts thousands of pre-trained models and datasets | | [Cohere](https://cohere.ai/) | A platform that provides natural language understanding APIs powered by large-scale neural networks | | [Project Llama](https://github.com/Llama-Group/Project-Llama) | A library that makes it easy to load (small) language models locally on your PC | | [Azure OpenAI](https://azure.microsoft.com/en-us/services/openai/) | A cloud service that provides access to OpenAI’s LLMs | Before seeing an example of interacting with a “chat model”, you must install the required packages to set up the environment. ```bash pip install langchain==0.0.346 openai==1.3.7 tiktoken==0.5.2 cohere==4.37 ``` The next step is setting the OpenAI API key in your environment to use the endpoints. Please remember to replace the `<OPENAI_API_KEY>` placeholder with your key. Alternatively, you can load environment variables from a `.env` file using the `[dotenv](https://pypi.org/project/python-dotenv/)` library. ```python import os os.environ[""OPENAI_API_KEY""] = ""<OPENAI_API_KEY>"" ``` Now, you can run the following code to interact with the OpenAI GPT-3.5 Turbo chat model. ```python from langchain.chat_models import ChatOpenAI from langchain.schema import HumanMessage, SystemMessage chat = ChatOpenAI(model_name=""gpt-3.5-turbo"", temperature=0) messages = [ SystemMessage( content=""You are a helpful assistant."" ), HumanMessage( content=""What is the capital of France?"" ), ] chat(messages) ``` ``` AIMessage(content='The capital of France is Paris.') ``` Chat models are a variation of language models that use a message-based input and output system. Chat models operate using LLMs but have a different interface that uses “messages” instead of raw text input/output. A Chat model in LangChain has three types of messages: - `SystemMessage` sets the behavior and objectives of the chat model. You would give specific instructions here like, “Act like a Marketing Manager.” or “Return only a JSON response and no explanation text.” - The `HumanMessage` type is where you would input the user’s prompts to be sent to the model. - Lastly, the `AIMessage` represents the responses from the models, which can be used to pass the history of previous messages to the model. ### 2-2. **Embedding Models** [Text embedding models](https://python.langchain.com/docs/modules/data_connection/text_embedding/) are a standardized interface for various embedding model providers like OpenAI, Cohere, and HuggingFace. These models transform text into vector representations, enabling operations like semantic search through text similarity in vector space. In LangChain, the `embed_documents` method is used to embed multiple texts, providing a list of vector representations.
```python from langchain.embeddings import OpenAIEmbeddings # Initialize the model embeddings_model = OpenAIEmbeddings() # Embed a list of texts embeddings = embeddings_model.embed_documents( [""Hi there!"", ""Oh, hello!"", ""What's your name?"", ""My friends call me World"", ""Hello World!""] ) print(""Number of documents embedded:"", len(embeddings)) print(""Dimension of each embedding:"", len(embeddings[0])) ``` ``` Number of documents embedded: 5 Dimension of each embedding: 1536 ``` The above code snippet illustrates how the embedding model can effectively convert phrases such as ""Hi there!"" into a 1536-dimensional vector. The remarkable aspect of these embeddings lies in their consistent output dimensionality, regardless of the input's length, while capturing the meaning of the sequences. These attributes enable us to measure sentence similarity using similarity metrics, such as [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity).",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320310-langchain-basic-concepts-recap 86,LangChain Basic Concepts Recap,"# 2. Models ## 3. ****The Role of [Vector Stores](https://python.langchain.com/docs/modules/data_connection/vectorstores/)** The increasing use of embeddings has created a demand for databases that can effectively store and search these embeddings. Vector stores tailored for managing vector data, such as [Deep Lake by Activeloop](https://www.activeloop.ai/), have become essential. They are a fundamental framework for storing and retrieving embeddings produced by LLMs. Embeddings are high-dimensional vectors that capture the semantics of textual data. They are generated using LLMs and are crucial for tasks like text similarity, clustering, and retrieval. Traditional databases are not optimized for high-dimensional data. Vector stores, on the other hand, are built to handle such data, offering faster and more accurate retrieval. The advantages of using vector stores in LangChain: - **Speed**: Vector stores are optimized for quick data retrieval, which is essential for real-time applications. - **Scalability**: As your application grows, so does your data. Vector stores can handle this growth efficiently. - **Precision**: Vector stores ensure you get the most relevant results using specialized algorithms for nearest neighbor search. ### 3-1. **[Retrievers](https://python.langchain.com/docs/modules/data_connection/retrievers/)** Once the data is in the database, you need to retrieve it. Retrievers in LangChain are interfaces that return documents in response to a query. The most straightforward approach is to use basic similarity metrics such *as cosine similarity*, which compares the angle between vector embeddings of the question and potential answers, ensuring that the responses generated are semantically aligned with the query. This method effectively narrows down the most contextually relevant information from a vast dataset, improving the precision and relevance of the responses. However, there are more advanced retrieval approaches to increase precision. LangChain offers a variety of retrieval methods; some examples include: - [Parent Document Retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/parent_document_retriever): It creates multiple embeddings, allowing you to look up smaller chunks but return larger contexts. 
It becomes easier to discover related content when dealing with smaller chunks through cosine similarity, and then the parent document (e.g., the whole document containing that chunk) will be used to offer additional context to the LLM for generating the final answer. - [Self-Query Retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/self_query): User questions often reference something that isn't just semantic but contains logic that can be represented as metadata filters. Self-query allows us to generate several filters based on the user's input prompt and apply them to the document's metadata. It improves performance by getting the most out of users' prompts and using the document and its metadata to retrieve the most relevant content. For example, a query like “What were Microsoft's revenues in 2021?” would generate the filter “datetime = 2021,” which would then be used to filter the documents (assuming that we know their publish dates).",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320310-langchain-basic-concepts-recap 87,LangChain Basic Concepts Recap,"# 2. Models ## 4. [Chains](https://python.langchain.com/docs/modules/chains/) Chains consist of powerful, reusable components that can be linked together to perform complex tasks. Integrating prompt templates with LLMs using chains allows a powerful synergy. Taking the output of one LLM and using it as input for the next makes it feasible to connect multiple prompts sequentially. Additionally, it allows us to integrate LLMs with other components, such as long-term memory and output guarding. Chains can enhance the overall quality and depth of interactions. The following two chain classes can be helpful in different situations: `LLMChain` and `SequentialChain`. ### 4-1. **[LLMChain](https://python.langchain.com/docs/modules/chains/foundational/llm_chain)** LLMChain is the simplest form of chain in LangChain that transforms user inputs using a `PromptTemplate`, a fundamental and widely used tool for interacting with LLMs. The following code receives the user’s input and a parser class to create a `PromptTemplate` object that can then interact with a model. The prompt object defines the parameters of our request to the model and determines the expected format of the output. Then, we can use the `LLMChain` class to tie the prompt and model to make predictions. Note that the `StrOutputParser` class will ensure that we receive a string containing the responses from the LLM. ```python from langchain.chains import LLMChain from langchain.chat_models import ChatOpenAI from langchain.prompts import PromptTemplate from langchain.schema import StrOutputParser template = """"""List all the colors in a rainbow"""""" prompt = PromptTemplate( template=template, input_variables=[], output_parser=StrOutputParser() ) chat = ChatOpenAI(model_name=""gpt-3.5-turbo"", temperature=0) llm_chain = LLMChain(prompt=prompt, llm=chat) llm_chain.predict() ``` ``` The colors in a rainbow are: 1. Red 2. Orange 3. Yellow 4. Green 5. Blue 6. Indigo 7. Violet ``` It is also possible to use the newly introduced [LangChain Expression Language (LCEL)](https://python.langchain.com/docs/expression_language/) scheme to make the code more readable. The following code will generate the same output as the previous, but it is easier to interpret the data flow from the user prompt and parse the output. 
```python from langchain.chat_models import ChatOpenAI from langchain.prompts import PromptTemplate from langchain.schema import StrOutputParser prompt = PromptTemplate.from_template( ""List all the colors in a {item}."" ) runnable = prompt | ChatOpenAI(temperature=0) | StrOutputParser() runnable.invoke({""item"": ""rainbow""}) ``` ``` The colors in a rainbow are: 1. Red 2. Orange 3. Yellow 4. Green 5. Blue 6. Indigo 7. Violet ``` ### 4-2. **[Sequential](https://python.langchain.com/docs/modules/chains/foundational/sequential_chains)** After invoking a language model using `LLMChain`, we can now experiment with making a series of subsequent calls to an LLM. This approach is especially beneficial for using the output of one call as the input for the next, streamlining the process, and enabling complex interactions across various applications. Here's an example of how to use the `SequentialChain`. ```python from langchain.prompts import PromptTemplate post_prompt = PromptTemplate.from_template( """"""You are a business owner. Given the theme of a post, write a social media post to share on my socials. Theme: {theme} Content: This is social media post based on the theme above:"""""" ) review_prompt = PromptTemplate.from_template( """"""You are an expert social media manager. Given the presented social media post, it is your job to write a review for the post. Social Media Post: {post} Review from a Social Media Expert:"""""" ) from langchain.chat_models import ChatOpenAI from langchain.schema import StrOutputParser llm = ChatOpenAI(temperature=0.0) chain = ( {""post"": post_prompt | llm | StrOutputParser()} | review_prompt | llm | StrOutputParser() ) chain.invoke({""theme"": ""Having a black friday sale with 50% off on everything.""}) ``` ``` This social media post is highly effective in promoting the Black Friday sale. The use of emojis and exclamation marks adds excitement and grabs the attention of the audience. The post clearly states the offer - a 50% off on everything in-store and online, which is a great deal. It also highlights the variety",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320310-langchain-basic-concepts-recap 88,LangChain Basic Concepts Recap,"# 2. Models ## 4. [Chains](https://python.langchain.com/docs/modules/chains/) of products available, from trendy fashion pieces to must-have accessories, appealing to a wide range of customers. The post encourages urgency by mentioning that the sale is for a limited time only and advises customers to arrive early to catch the best deals. The call to action is clear, directing customers to visit the website or head to the store to explore the products. The post also encourages customers to spread the word and tag their shopping buddies, which can help increase the reach and engagement of the post. Overall, this social media post effectively communicates the details of the Black Friday sale, creates excitement, and encourages customers to take advantage of the unbeatable offer. It is well-written, visually appealing, and likely to generate interest and engagement from the audience. ``` The example above uses the mentioned LCEL language to create two distinct chains. The first chain will generate a social media post based on a theme, and the other one will act as a social media expert to review the generated post. It is evident that the output of the `post_prompt` will be marked as the `post` variable and passed to the `review_prompt` template. 
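Building on the chain above, the same pattern can also return the intermediate post together with the review. The following is a minimal sketch that reuses the `post_prompt`, `review_prompt`, and `llm` objects defined above and assumes `RunnablePassthrough` is available in your LangChain version:

```python
from langchain.schema import StrOutputParser
from langchain.schema.runnable import RunnablePassthrough

post_chain = post_prompt | llm | StrOutputParser()
review_chain = review_prompt | llm | StrOutputParser()

# First generate the post, then keep it in the output and attach the review alongside it
chain = {'post': post_chain} | RunnablePassthrough.assign(review=review_chain)

result = chain.invoke({'theme': 'Having a black friday sale with 50% off on everything.'})
# result is a dict such as {'post': '...', 'review': '...'}
```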
You can also find a variation of this code in the lesson notebook, where the model returns both the social media post and the review. Also, refer to [the documentation](https://python.langchain.com/docs/modules/chains/foundational/sequential_chains#legacy-sequentialchain) for the use of the legacy `SequentialChain` class.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320310-langchain-basic-concepts-recap 89,LangChain Basic Concepts Recap,"# 2. Models ## 5. [Memory](https://python.langchain.com/docs/modules/memory/) Memory is the backbone for maintaining context in ongoing dialogues, ensuring that the LLM can provide coherent and contextually relevant responses. Memory in LangChain is essential for context preservation and enhancing user experience. Traditional conversational models often struggle with maintaining context. LangChain’s Memory module addresses this by storing both input and output messages in a structured manner. The system can offer more personalized and relevant responses by remembering and referring to past interactions, significantly improving the user experience. This flexibility makes it suitable for conversational applications. Memory module methods save the current conversational context, including both user input and system output.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320310-langchain-basic-concepts-recap 90,LangChain Basic Concepts Recap,"# Conclusion In this lesson, we explored LangChain's essential components. LangChain stands out with its versatile document loaders, which are capable of importing varied data types. We discussed its document transformation techniques, especially chunking, which breaks down long texts into manageable segments. We also learned about vector stores and their benefits, including faster data retrieval, scalability to handle data growth, and precision in retrieving the most relevant results. Additionally, we covered LangChain's use of retrievers and chains. Retrievers provide various methods to return documents in response to queries. Chains facilitate the creation of complex workflows by linking multiple components and integrating LLMs with external data and long-term memories like chat history. This results in more effective responses and coherent conversations. Overall, it is important to have a good grasp of LangChain's capabilities to pursue advanced projects in the field. >> [Notebook](https://colab.research.google.com/drive/1rDAJyqfVkZR-HQGlzeMi1tNekqyTgXgy?usp=sharing). 
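As a minimal illustration of the Memory module recapped above (a sketch assuming `ConversationChain` paired with `ConversationBufferMemory` from the same LangChain version), past turns are stored by the memory object and injected into the next prompt automatically:

```python
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

# The memory object records every user/AI turn and adds it to the next prompt
memory = ConversationBufferMemory()
conversation = ConversationChain(
    llm=ChatOpenAI(temperature=0),
    memory=memory,
)

conversation.predict(input='Hi, my name is Ada.')
conversation.predict(input='What is my name?')  # the stored context lets the model answer 'Ada'
print(memory.buffer)  # the full conversation history kept by the memory module
```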
--- **RESOURCES** [Simple guide to Text Chunking for Your LLM Applications](https://medium.com/aimonks/simple-guide-to-text-chunking-for-your-llm-applications-bddfe8ad7892) - more on chunking [Understanding Text Chunking for the LLM Application](https://medium.com/@ashu.goel_9925/understanding-text-chunking-for-the-llm-application-da59cbc2855b) - complete guide [LangChain: A Complete Guide & Tutorial](https://nanonets.com/blog/langchain/#module-ii-retrieval) - more on models [LangChain Models: Simple and Consistent Interfaces for LLMs, Chat, and Text Embeddings](https://www.davidgentile.net/langchain-models/) - more on llms [LangChain: Quickly Build Apps with OpenAI API – Kanaries](https://docs.kanaries.net/articles/langchain-openai#chains-combining-llms-and-prompts) - Here’s a list of LLMs integrated with LangChain along with a description of them (source is [https://www.davidgentile.net/langchain-models/](https://www.davidgentile.net/langchain-models/)): | LLM | DESCRIPTION | | --- | --- | | https://www.ai21.com/?ref=davidgentile.net | A platform for building AI applications that comprehend and generate natural language powered by Jurassic-1 language models | | https://aleph-alpha.de/?ref=davidgentile.net | A company that develops large-scale language models for the European market | | https://azure.microsoft.com/en-us/services/openai/?ref=davidgentile.net | A cloud service that provides access to OpenAI’s GPT-3 language model | | https://www.banana.dev/?ref=davidgentile.net | A platform that helps you build web applications using natural language commands and templates | | https://www.cerebrium.ai/?ref=davidgentile.net | A platform that enables data scientists and developers to build and deploy AI solutions faster and easier | | https://cohere.ai/?ref=davidgentile.net | A platform that provides natural language understanding APIs powered by large-scale neural networks | | https://deepinfra.ai/?ref=davidgentile.net | A platform that simplifies the deployment and management of deep learning models on cloud infrastructure | | https://forefront.ai/?ref=davidgentile.net | A platform that helps businesses leverage AI to optimize their operations and customer experiences | | https://goose.ai/?ref=davidgentile.net | A platform that provides natural language generation APIs for various domains and use cases | | https://gpt4all.com/?ref=davidgentile.net | A platform that allows anyone to interact with GPT-3 and other language models without coding or API keys | | https://github.com/huggingface/huggingface_hub/tree/main/api-inference-community/docker_images/generic/app?ref=davidgentile.net | A tool that allows you to run Hugging Face pipelines locally on your machine or server | | https://github.com/Llama-Group/Project-Llama?ref=davidgentile.net | A library that provides fast and easy-to-use data structures and algorithms for C++ developers | | https://manifest.ai/?ref=davidgentile.net | A platform that helps you create engaging and interactive content using natural language generation and computer vision | | https://modal.ai/?ref=davidgentile.net | A platform that helps you build conversational AI applications using natural language understanding and dialog management | | https://nlpcloud.io/?ref=davidgentile.net | A platform that provides high-performance NLP APIs for various tasks such as sentiment analysis, named entity recognition, summarization, etc. | | https://github.com/bigscience-workshop/petals?ref=davidgentile.net | Petals run 100B+ language models at home, BitTorrent-style. 
| | https://pipeline.ai/?ref=davidgentile.net | A platform that helps you build, train, deploy, and monitor machine learning models at scale on any cloud or edge device | | https://predictionguard.com/?ref=davidgentile.net | A platform that helps you monitor and improve the performance of your machine-learning models in production | | https://promptlayer.com/openai?ref=davidgentile.net | A tool that helps you craft effective prompts for OpenAI’s GPT-3 language model using best practices and examples | | https://replicate.ai/?ref=davidgentile.net | A tool that helps you",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320310-langchain-basic-concepts-recap 91,LangChain Basic Concepts Recap,"# Conclusion version, package, and share your machine learning experiments with your team or the world | | https://runhouse.io/?ref=davidgentile.net | A platform that helps you run your machine learning models on any device with a web browser using WebAssembly technology | | https://github.com/aws-samples/sagemaker-endpoint-deployment-of-fastai-model-with-torchserve?ref=davidgentile.net | A tool that helps you deploy your machine learning models on AWS SageMaker with a few lines of code | | https://stochastic.ai/?ref=davidgentile.net | A platform that helps you optimize your machine learning workflows using probabilistic programming and Bayesian inference techniques | | https://writer.com/?ref=davidgentile.net | A platform that helps you create consistent and effective content using generative AI and brand guidelines |",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320310-langchain-basic-concepts-recap 92,"LangChain Overview Agents, Tools, and OpenGPT Intr","# LangChain Overview: Agents, Tools, and OpenGPT Introduction ## **Introduction** In this lesson, we’ll recap the concepts of tools and agents in LangChain and its main applications. We’ll also learn about LangChain OpenGPTs, an open-source effort to create AI assistants similar to the OpenAI GPTs.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320365-langchain-overview-agents-tools-and-opengpt-introduction 93,"LangChain Overview Agents, Tools, and OpenGPT Intr","# LangChain Overview: Agents, Tools, and OpenGPT Introduction ## **Agents** [LangChain agents](https://python.langchain.com/docs/modules/agents/) integrate chains, prompts, memory, and tools to complete tasks. Agents can be used to perform a wide range of tasks, from executing a series of steps in a specific order to interacting with external systems like Gmail or SQL databases and more. They can be customized to fit a variety of use cases, and LangChain provides a suite of tools and functionalities to facilitate the process. Let’s review the relevant LangChain concepts: - **[Chain](https://python.langchain.com/docs/modules/chains/)**: A sequential application of models or tools, such as passing a prompt to an LLM and then parsing it. - **[Tool](https://python.langchain.com/docs/modules/agents/tools/)**: In LangChain, a tool is a function that performs a specific task that an LLM can leverage to get relevant information for its task completion. It can be a Google Search, a Database lookup, a Python [REPL](https://en.wikipedia.org/wiki/Read%E2%80%93eval%E2%80%93print_loop), or other chains. 
- **[Memory](https://python.langchain.com/docs/modules/memory/)**: Memory keeps track of past interactions with an LLM to be used as context for the next interactions.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320365-langchain-overview-agents-tools-and-opengpt-introduction 94,"LangChain Overview Agents, Tools, and OpenGPT Intr","# LangChain Overview: Agents, Tools, and OpenGPT Introduction ## **Agent types** LangChain has several agent types. Find a comprehensive list below. - **[Zero-shot ReAct](https://python.langchain.com/docs/modules/agents/agent_types/react)**: It leverages the [ReAct framework](https://arxiv.org/abs/2210.03629) to make tool decisions based exclusively on the descriptions of the available tools. It’s called “zero-shot” because it only leverages the tool descriptions without using specific examples. - **[Structured Input ReAct](https://python.langchain.com/docs/modules/agents/agent_types/structured_chat)**: Optimized for handling tools that require multiple inputs. - **[OpenAI Functions Agent](https://python.langchain.com/docs/modules/agents/agent_types/openai_functions_agent)**: Tailored for models specifically fine-tuned for function calls, this agent is compatible with advanced models like `gpt-3.5-turbo` and `gpt-4`. - **[Conversational Agent](https://python.langchain.com/docs/modules/agents/agent_types/chat_conversation_agent)**: Focuses on conversational contexts, utilizing ReAct for tool selection and incorporating memory to recall past interactions. - **[Self-Ask with Search Agent](https://python.langchain.com/docs/modules/agents/agent_types/self_ask_with_search)**: Centering around the ""Intermediate Answer"" tool, it excels at finding factual answers to queries, mirroring the approach in the original *[self-ask with](https://ofir.io/self-ask.pdf) search* study. - **[ReAct Document Store Agent](https://python.langchain.com/docs/modules/agents/agent_types/react_docstore)**: This agent requires ""Search"" and ""Lookup"" tools, simultaneously using both features while providing a stream of thoughts. LangChain’s agents essentially provide the ‘reasoning’ behind the choice of the to-do action, deciding whether to involve multiple tools, just one, or none at all in the process.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320365-langchain-overview-agents-tools-and-opengpt-introduction 95,"LangChain Overview Agents, Tools, and OpenGPT Intr","# LangChain Overview: Agents, Tools, and OpenGPT Introduction ## **Plan-and-Execute Agents** The [plan-and-execute agents](https://blog.langchain.dev/plan-and-execute-agents/) first make a plan with multiple actions and then execute each action sequentially. They are more suited for complex or long-running tasks as they maintain focus on long-term objectives and focus. However, they may lead to more latency.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320365-langchain-overview-agents-tools-and-opengpt-introduction 96,"LangChain Overview Agents, Tools, and OpenGPT Intr","# LangChain Overview: Agents, Tools, and OpenGPT Introduction ## Available tools and custom tools You can find the list of tools that integrate LangChain with other tools [here](https://python.langchain.com/docs/integrations/toolkits/). Some examples are: - [The Python tool](https://python.langchain.com/docs/integrations/toolkits/python): It’s used to execute generated Python code to answer a question. 
- [The JSON tool](https://python.langchain.com/docs/integrations/toolkits/json): It’s used when interacting with a JSON blob that is too large to fit in the LLM context window. - [The CSV tool](https://python.langchain.com/docs/integrations/toolkits/csv): It’s used to interact with CSV files. [Custom tools](https://python.langchain.com/docs/modules/agents/tools/custom_tools) extend the capabilities of agents, making them adaptable to a wide range of specialized tasks and interactions. Custom tools provide task-specific functionality and flexibility for precise behaviors tailored to unique use cases. The level of customization depends on the creation of advanced interactions, where tools can be orchestrated to perform complex behaviors, such as generating questions, searching the web for answers, and summarizing the gathered information.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320365-langchain-overview-agents-tools-and-opengpt-introduction 97,"LangChain Overview Agents, Tools, and OpenGPT Intr","# LangChain Overview: Agents, Tools, and OpenGPT Introduction ## ****LangChain OpenGPT**** LangChain OpenGPT is an open-source effort to create an experience similar to OpenAI’s assistants and GPTs. In contrast to OpenAI GPTs, LangChain OpenGPT allows you to configure the LLM used, the tools, the vector DB, the retrieval algorithm, and the chat history DB. Let’s see how to use them: 1. **Clone the Repository**: To interact with Langchain's OpenGPTs, follow these steps detailed in their [GitHub repository](https://github.com/langchain-ai/opengpts). The easiest way to launch OpenGPTs locally is through [Docker](https://www.docker.com/) and “[docker compose](https://docs.docker.com/compose/)”. First, clone the repo locally and `cd` into it. ```bash git clone https://github.com/langchain-ai/opengpts.git cd opengpts ``` You should now see a `.env` file with the following content. ```bash OPENAI_API_KEY=placeholder ANTHROPIC_API_KEY=placeholder YDC_API_KEY=placeholder TAVILY_API_KEY=placeholder AZURE_OPENAI_DEPLOYMENT_NAME=placeholder AZURE_OPENAI_API_KEY=placeholder AZURE_OPENAI_API_BASE=placeholder AZURE_OPENAI_API_VERSION=placeholder ``` By default, the app will be using the OpenAI models, so replace the `placeholder` of `OPENAI_API_KEY` with your OpenAI key. You can now launch everything with the following command. ```bash docker compose up ``` Now, visit [http://localhost:8100/](http://localhost:8100/); you should see the following page. ![Screenshot 2023-12-15 at 15.25.28.png](LangChain%20Overview%20Agents,%20Tools,%20and%20OpenGPT%20Intr%20d3b8525f4cfd43c097accae103c39486/LangChain%20Overview%20Agents,%20Tools,%20and%20OpenGPT%20Intr%20c3bced6325074bc4be074a75d1686d01/Screenshot_2023-12-15_at_15.25.28.png)",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320365-langchain-overview-agents-tools-and-opengpt-introduction 98,"LangChain Overview Agents, Tools, and OpenGPT Intr","# LangChain Overview: Agents, Tools, and OpenGPT Introduction ## ****Creating OpenGPTs**** We are now ready to create our first OpenGPT! To set a custom prompt in LangChain's OpenGPTs, you begin by defining a specific role or persona for the AI, like a 'Career Counselor'. > You guide the models' responses to fit the desired context and ensure that its advice, insights, or recommendations are aligned with the defined role. 
> This **`System Message`** outlines the OpenGPTs' responsibilities, tone, and the type of interactions it should engage in: ``` You are a Career Counselor. Your role is to provide insightful and personalized guidance as I navigate my professional path. Whether I'm facing career uncertainties, seeking job advancement, or contemplating a career shift, your expertise is aimed at offering constructive, individualized advice that helps me make informed decisions. Our sessions will be a platform for discussing my professional aspirations, skills, and potential barriers. In our interactions, I expect a supportive environment where I can share my professional experiences, goals, and concerns. Your role is to motivate and provide clear, practical strategies that align with my career objectives. By understanding my unique circumstances, you offer tailored advice and plans to aid my professional growth. This collaboration is crucial for my career development, with your guidance being a cornerstone of my journey towards achieving my career goals. ``` This prompt acts as a foundational script, directing the model's behavior to meet the specific needs of your application or service. Click on “New Bot”, select GPT-3.5-Turbo (or another default that you want to use), name the bot “**Career Counselor**” and provide the System Message we created. Click Save, and you are ready to chat!",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320365-langchain-overview-agents-tools-and-opengpt-introduction 99,"LangChain Overview Agents, Tools, and OpenGPT Intr","# LangChain Overview: Agents, Tools, and OpenGPT Introduction ## Conclusion In this lesson, we covered LangChain’s tools and agents, their applications, and the creation of LangChain OpenGPTs, an open-source initiative for AI assistants. We explored agent types, including Zero-shot ReAct and Conversational Agents, along with plan-and-execute agents. We discussed the integration of various tools like Python and CSV and the customization of these tools. Finally, we set up and used LangChain OpenGPTs, emphasizing their reconfigurability and role in facilitating AI interactions. RESOURCES: - OpenGPTs [OpenGPTs](https://opengpts-example-vz4y4ooboq-uc.a.run.app/) [Maximizing LangChain Efficiency: Agents and ReAct Method Review.](https://medium.com/towards-artificial-intelligence/maximizing-langchain-efficiency-agents-and-react-method-review-3abcf8a5c874) [GitHub - langchain-ai/opengpts](https://github.com/langchain-ai/opengpts) - tools docs [Defining Custom Tools | 🦜️🔗 Langchain](https://python.langchain.com/docs/modules/agents/tools/custom_tools)",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320365-langchain-overview-agents-tools-and-opengpt-introduction 100,"Deep Memory for RAG Applications Across Legal, Fin","# Deep Memory for RAG Applications Across Legal, Financial, and Biomedical Industries Build a RAG System with LLM and Deep Memory. Use three different datasets and test the quality of response with and without the Deep Memory feature to improve retrieval! Traditionally, retrieval methods have relied on standard techniques like RAG or query-based document retrieval. However, often the results are not fully satisfying as the retrieved chunks of text are not exactly what we expect. To solve this problem, many techniques are tested every day. In this guide, we will explore how to enhance the storage of indices in the retrieval pipeline in a more effective way through finetuning techniques. 
Unlike other techniques, Deep Memory enables an automatic and convenient finetuning of the retrieval step on the chunks of data provided, improving on a classic, generic RAG application. Deep Memory emerges as a fundamental solution in addressing the critical need for accurate retrieval in generating high-quality results. It increases the accuracy of Deep Lake’s vector search by up to 22%, achieved by learning an index from labeled queries tailored to specific applications. Importantly, this improvement is achieved without compromising search time, demonstrating the efficacy of Deep Memory in fine-tuning the retrieval process. The enterprise application landscape, particularly the development of “chat with your data” solutions, highlights the importance of accurate retrieval. Current practices integrate Retrieval Augmented Generation (RAG) systems with Large Language Models (LLMs) like GPT-4, and Deep Memory’s ability to significantly improve the accuracy of vector search makes it a natural fit in this context: it offers a way to increase the reliability of these applications and to reach the desired level of consistency and precision in the generated results. Let’s build a Deep Memory RAG application! We need three main components: a **dataset** containing the text chunks we want to retrieve, an **embedding model** to generate the text embeddings, and the **Deep Memory** feature. Let's delve into the details.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51342460-deep-memory-with-langchain-rag-applications-across-legal-financial-and-biomedical-industries 101,"Deep Memory for RAG Applications Across Legal, Fin","# Deep Memory for RAG Applications Across Legal, Financial, and Biomedical Industries ## Preparing the Dataset In this guide, we have prepared three different datasets that can be downloaded and tested with and without the Deep Memory feature. To find out more about them, you can follow the links: 1. **Finance**: we chose the FinQA Dataset, which contains text explaining the economy, acquisitions, etc. It is also a QA dataset, making it easier for us to embed, as we already have questions and related answers without generating them! This work focuses on answering deep questions about financial data, aiming to automate the analysis of a large corpus of financial documents. In contrast to existing tasks in the general domain, the finance domain includes complex numerical reasoning and an understanding of heterogeneous representations. Source: [https://github.com/czyssrs/FinQA](https://github.com/czyssrs/FinQA) 2. **Legal**: the LegalBench Dataset contains questions and answers about legal subjects such as a company's legal rights, policies, and similar matters. This is a tedious and highly specific domain that is not easily readable by everyone, so retrieving the right information for this task is very welcome! LegalBench tasks span multiple types (binary classification, multi-class classification, extraction, generation, entailment), multiple types of text (statutes, judicial opinions, contracts, etc.), and multiple areas of law (evidence, contracts, civil procedure, etc.). It is a benchmark consisting of different legal reasoning tasks. Source: [https://huggingface.co/datasets/nguha/legalbench?clone=true](https://huggingface.co/datasets/nguha/legalbench?clone=true) 3. 
**Biomedical**: To address a biomedical topic we chose the Cord19 Dataset, which is about Covid. As it is a very discussed topic, retrieving every possible information is crucial and so we wanted to test it. CORD-19 is a corpus of academic papers about COVID-19 and related coronavirus research. It's curated and maintained by the Semantic Scholar team at the Allen Institute for AI to support text mining and NLP research. Source: [https://github.com/allenai/cord19](https://github.com/allenai/cord19) These three datasets are in the Activeloop organization space, so you need to load them in a Tensor Database format to be able to take advantage of the Deep Memory functionality. In the code above the variable `user_hub` will be equal to the Organization name, in our case `""activeloop""`, and `name_db` will be the ``. ```python def load_vector_store(user_hub, name_db): vector_store_db = DeepLakeVectorStore( f""hub://{user_hub}/{name_db}"", embedding_function=embeddings_function.embed_documents, runtime={""tensor_db"": True}, ) return vector_store_db ``` The datasets were created with a preprocessing consisting of 3 different steps: - **Gather** the data - **Divide** the data into **chunks** - **Create sample questions** The last point listed must be applied for every chunk. Specifically, it creates a relevance score that represents how relevant the question is when compared to the chunk of text (this is necessary for the most critical part: the Deep Memory Finetuning). ### Gather the data The simplest solution is to download a QA Dataset on whatever is our topic of interest so that it possesses everything we need. We can easily do it with the following command: ```bash wget ``` But what if our data is just a long text? ### Chunk Generation To generate the chunks, we can use libraries like **Langchain**, which provide methods to divide our text into chunks automatically. Below are some examples of the generated chunks: - Legal: ![chunk_legal.webp](Deep%20Memory%20for%20RAG%20Applications%20Across%20Legal,%20Fin%202dfbaa4e47b84ce7a82b1996be45d0ef/chunk_legal.webp) - Biomedical: ![chunk_biomedical.webp](Deep%20Memory%20for%20RAG%20Applications%20Across%20Legal,%20Fin%202dfbaa4e47b84ce7a82b1996be45d0ef/chunk_biomedical.webp) - Finance: ![chunk_finance.webp](Deep%20Memory%20for%20RAG%20Applications%20Across%20Legal,%20Fin%202dfbaa4e47b84ce7a82b1996be45d0ef/chunk_finance.webp) There are multiple strategies to do this step most efficiently. For instance, we can use “.” as a separator character, or define the length of each chunk as a standard, or a combination of those! We suggest creating chunks that are not too short and overlapping them to keep relevant information intact. The main disadvantage of not overlapping chunks is the potential loss of information; depending on the nature of the data and the requirements of the analysis or modeling task, it's often beneficial to experiment with overlapping chunks. 
",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51342460-deep-memory-with-langchain-rag-applications-across-legal-financial-and-biomedical-industries 102,"Deep Memory for RAG Applications Across Legal, Fin","# Deep Memory for RAG Applications Across Legal, Financial, and Biomedical Industries ## Preparing the Dataset ```python from langchain.text_splitter import RecursiveCharacterTextSplitter # Divide text in chunks def create_chunks(context, chunk_size=300, chunk_overlap=50): # Initialize the text splitter with custom parameters custom_text_splitter = RecursiveCharacterTextSplitter( # Set custom chunk size chunk_size = chunk_size, chunk_overlap = chunk_overlap, # Use length of the text as the size measure length_function = len, ) chunks = custom_text_splitter.split_text(context) return chunks ``` ### Questions and Relevance Generation This is the most subtle step: how do we generate information like questions or scores without a properly trained model? The good (not so old) LLMs come to the rescue to accomplish this task! We can use prompt engineering on an LLM to generate a **question** for each chunk and, at the same time, generate the **relevance score** as a **classification task**. We just need to call the model and then parse the text output to obtain the necessary data. To do this, we construct a dataset of questions and relevance. Relevance is a set of pairs (`corpus.id`: str, significance: str) that provides information about where the answer is located inside the corpus. Sometimes an answer can be found in multiple locations or have different significance. Relevance enables Deep Memory training to optimize the embedding space for higher accuracy. ![dataset_creation.webp](Deep%20Memory%20for%20RAG%20Applications%20Across%20Legal,%20Fin%202dfbaa4e47b84ce7a82b1996be45d0ef/dataset_creation.webp) An example of how to generate questions and relevance scores is the following: ```python #Sample Prompt message to generate Question and Answer for the provided context system_message = """""" Generate a question related to the context and provide a relevance score on a scale of 0 to 1, where 0 is not relevant at all and 1 is highly relevant. 
The input is provided in the following format: Context: [The context for the generated question] The output is in the following format: #Question#: [Text of the question] #Relevance#: [score number between 0 and 1] The context is: {context} """""" def get_chunk_qa_data(context): # Generate the Question and Relevance Text with LLM llm = OpenAI(temperature=0) llm_chain = LLMChain(llm=llm, prompt=PromptTemplate.from_template(system_message)) output = llm_chain(context) # CHECK THE RELEVANCE STRING IN THE OUTPUT check_relevance = None relevance_strings = [""#Relevance#: "", ""Relevance#: "", ""Relevance: "", ""Relevance""] for rel_str in relevance_strings: if rel_str in output[""text""]: check_relevance = rel_str break if check_relevance is None: raise ValueError(""Relevance not found in the output"") messages = output[""text""].split(check_relevance) relevance = messages[1] # CHECK THE QUESTION STRING IN THE OUTPUT question = None question_strings = [""#Question#: "", ""Question#: "", ""Question: "", ""Question""] for qst_str in question_strings: if qst_str in messages[0]: question = messages[0].split(qst_str)[1] break if question is None: raise ValueError(""Question not found in the output"") return question, relevance ``` Now we are all set for the fun part!",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51342460-deep-memory-with-langchain-rag-applications-across-legal-financial-and-biomedical-industries 103,"Deep Memory for RAG Applications Across Legal, Fin","# Deep Memory for RAG Applications Across Legal, Financial, and Biomedical Industries ## Deep Memory Tool Now we dive into the core of this tutorial: **Deep Memory**. **Deep Memory** is one of the tools included in the High Performance Features of **Deep Lake**. It is very effective at improving the retrieval accuracy of an LLM application by optimizing your vector store for your use case, **enhancing the performance** of your overall LLM app. This is possible by performing a **fine-tuning of the embeddings** produced by your embedding model, using your own dataset enriched with additional QA information that consists of **Questions** and **Relevance scores** (how much each question is related to the text). ![https://images.ctfassets.net/qtqp2awm2ktd/49raxwopyyZ4X8Bga1XwI0/12a59c6e56bd1edec954bad3ab4278d5/Untitled__1_.png](https://images.ctfassets.net/qtqp2awm2ktd/49raxwopyyZ4X8Bga1XwI0/12a59c6e56bd1edec954bad3ab4278d5/Untitled__1_.png) ### Creating the Deep Memory Vector Store Deep Memory learns an index from labeled queries tailored to your dataset, without impacting search time. These results can be achieved with only a few hundred example pairs of prompt embeddings and the most relevant answers from the vector store. As we can see, Deep Memory uses the dataset text (corpus) along with the questions (queries) and relevance scores we generated to train an enhanced retrieval model that can be used without any other modification, adding no latency while boosting retrieval quality. Deep Memory increases retrieval accuracy without altering your existing workflow. ```python def load_vector_store(user_hub, name_db): vector_store_db = DeepLakeVectorStore( f""hub://{user_hub}/{name_db}"", embedding_function=embeddings_function.embed_documents, runtime={""tensor_db"": True}, ) return vector_store_db ``` In order to create a Deep Memory Dataset, besides the chunks of text, we just need 2 things: 1. An **embedding function** to generate the text embeddings 2. 
The metadata we prepared (**questions and relevance scores**) For the first point, embeddings can still be computed using a model of your choice such as Open AI ada-002 or other OSS models BGE by BAAI. Furthermore, search results from Deep Memory can be further improved by combining them with lexical search or reranker. For the second point, we explained how to generate questions and relevance scores in the previous paragraphs. If you want to go deeper into details, there are some questions and relevance computed during the training phase: ***Legal Dataset:*** - Chunk: ""*Confidential Information means all confidential information relating to the Purpose which the Disclosing Party or any of its Affiliates, discloses or makes available, to the Receiving Party or any of its Affiliates, before, on or after the Effective Date. This includes the fact that discussions and negotiations are taking place concerning the Purpose and the status of those discussions and negotiations.*"" - Question: What is the definition of Confidential Information? ***Biomedical Dataset:*** - Chunk: ""*The P2 64 and P3 regions encode the non-structural proteins 2B and 2C and 3A, 3B (1-3) (VPg), 3C pro and 4 structural protein-coding regions is replaced by reporter genes, allow the study of genome 68 replication without the requirement for high containment.*"" - Question: What are the non-structural proteins encoded by the P2 64 and P3 regions? ***Finance Dataset:*** - Chunk: ""*the deferred fuel cost revisions variance resulted from a revised unbilled sales pricing estimate made in december 2002 and a further revision made in the first quarter of 2003 to more closely align the fuel component of that pricing with expected recoverable fuel costs . the asset retirement obligation variance was due to the implementation of sfas 143 , ""accounting for asset retirement obligations"" adopted in january 2003 . see ""critical accounting estimates"" for more details on sfas 143 . the increase was offset by decommissioning expense and had no effect on net income . the volume variance was due to a decrease in electricity usage in the service territory . billed usage decreased 1868 gwh in the industrial sector including the loss of a large industrial customer to cogeneration.*"" - Question:",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51342460-deep-memory-with-langchain-rag-applications-across-legal-financial-and-biomedical-industries 104,"Deep Memory for RAG Applications Across Legal, Fin","# Deep Memory for RAG Applications Across Legal, Financial, and Biomedical Industries ## Deep Memory Tool What was the impact of the asset retirement obligation variance on net income? Providing those inputs to the Vector Store, we can upload the Dataset we created to the Active Loop Dataset Repository. After this step, the Deep Memory feature will do 2 things automatically: 1. Generate the embeddings the embedding model we defined 2. Finetune the indices using the Deep Memory feature And we are all set to try our improved RAG applications! But how to try it out now? 
### Deep Memory Search After creating the Deep Memory Dataset, we can search for the right piece of text for our question using the following code: ```python def get_answer(vector_store_db, user_question, deep_memory): # deep memory inside the vector store ==> deep_memory=True answer = vector_store_db.search( embedding_data=user_question, embedding_function=embeddings_function.embed_query, deep_memory=deep_memory, return_view=False, ) return answer ```",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51342460-deep-memory-with-langchain-rag-applications-across-legal-financial-and-biomedical-industries 105,"Deep Memory for RAG Applications Across Legal, Fin","# Deep Memory for RAG Applications Across Legal, Financial, and Biomedical Industries ## Developing a Deep Memory Search with Gradio We created a Gradio application to make testing easier. ![Untitled](Deep%20Memory%20for%20RAG%20Applications%20Across%20Legal,%20Fin%202dfbaa4e47b84ce7a82b1996be45d0ef/Untitled.png) The interface allows us to select the dataset we want to test, write a question, and instantly generate the answer. We can also compare the response returned by the Deep Memory model with the one returned by the model without it.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51342460-deep-memory-with-langchain-rag-applications-across-legal-financial-and-biomedical-industries 106,"Deep Memory for RAG Applications Across Legal, Fin","# Deep Memory for RAG Applications Across Legal, Financial, and Biomedical Industries ## Classic RAG vs Deep Memory To test out the improvements brought by the Deep Memory step, we prepared and shared the 3 datasets we mentioned earlier: **Legal, Biomedical, Finance**. In the output windows, you can see the benefits of this tool when compared to more classical approaches. If you want to try these models, we suggest you try one of these questions: - Legal Dataset: - What are the provisions of this Agreement regarding the disclosure of Confidential Information to third parties? - Biomedical Dataset: - What are the advantages of using the new package to visualize data? - Finance Dataset: - What were the primary factors that contributed to the improvement in net cash provided by operating activities during 2015? The following example, based on the Legal dataset question above, shows how the model with Deep Memory produces a more complete response: Deep Memory model: ``` The provisions of this Agreement state that disclosure of Confidential Information to third party consultants and professional advisors is allowed, as long as those third parties agree to be bound by this Agreement. Additionally, both parties are required to keep any confidential information they may have access to confidential, unless required by law or necessary to perform their obligations under this Agreement. This includes not only the information itself, but also the terms of the Agreement and the fact that the parties are considering a business arrangement. ``` Non Deep Memory model: ``` The provisions of this Agreement state that disclosure of Confidential Information to third party consultants and professional advisors is allowed, as long as those third parties agree to be bound by this Agreement. Additionally, the Confidential Information includes the terms of this agreement, the fact that the information is being made available, and the possibility of a business arrangement between the parties. 
```",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51342460-deep-memory-with-langchain-rag-applications-across-legal-financial-and-biomedical-industries 107,"Deep Memory for RAG Applications Across Legal, Fin","# Deep Memory for RAG Applications Across Legal, Financial, and Biomedical Industries ## Evaluation Metrics After testing out our datasets, we can see that the Deep Memory contribution is visible in retrieving more suitable information to the query provided as a question by the user. The following metrics show how the **Deep Memory feature** can improve performance: **Legal Dataset:** ``` ---- Evaluating without Deep Memory ---- Recall@1: 12.0% Recall@3: 37.0% Recall@5: 47.0% Recall@10: 57.0% Recall@50: 87.0% Recall@100: 94.0% ---- Evaluating with Deep Memory ---- Recall@1: 19.0% Recall@3: 56.0% Recall@5: 66.0% Recall@10: 79.0% Recall@50: 88.0% Recall@100: 95.0% ``` **Biomedical Dataset:** ``` ---- Evaluating without Deep Memory ---- Recall@1: 59.0% Recall@3: 75.0% Recall@5: 78.0% Recall@10: 81.0% Recall@50: 91.0% Recall@100: 94.0% ---- Evaluating with Deep Memory ---- Recall@1: 69.0% Recall@3: 81.0% Recall@5: 83.0% Recall@10: 86.0% Recall@50: 97.0% Recall@100: 98.0% ``` **Financial Dataset:** ``` ---- Evaluating without Deep Memory ---- Recall@1: 18.0% Recall@3: 51.0% Recall@5: 65.0% Recall@10: 71.0% Recall@50: 98.0% Recall@100: 99.0% ---- Evaluating with Deep Memory ---- Recall@1: 26.0% Recall@3: 66.0% Recall@5: 75.0% Recall@10: 81.0% Recall@50: 99.0% Recall@100: 99.0% ``` In conclusion, it is crucial to recognize that in NLP, success is not only determined by the richness of the data, but also depends on the effectiveness of the retrieval strategy. Although collecting large and diverse datasets is undeniably valuable, how the information is retrieved and presented plays a key role in optimizing model performance. As we have seen in this brief guide, there are tools such as Deep Memory that allow us to be more accurate and efficient and thus generate more relevant answers.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51342460-deep-memory-with-langchain-rag-applications-across-legal-financial-and-biomedical-industries 108,Module 2 Introduction - Advanced Retrieval Augment,"# Module 2 Introduction - Advanced Retrieval Augmented Generation The ""Advanced Retrieval Augmented Generation"" module offers an in-depth exploration into optimizing large language models (LLMs) with advanced Retrieval-Augmented Generation (RAG) techniques. Across four lessons, it encompasses a range of topics including query transformation, re-ranking, optimization techniques like fine-tuning, the implementation of Activeloop's Deep Memory, and other advanced strategies using LlamaIndex. Students will gain practical experience in enhancing RAG system performance, from query refinement to production deployment and iterative optimization. 
The module is designed to provide a comprehensive understanding of building, refining, and deploying efficient RAG systems, integrating hands-on examples with theoretical knowledge to prepare students for real-world applications.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352087-module-2-introduction-advanced-retrieval-augmented-generation 109,Module 2 Introduction - Advanced Retrieval Augment,"# Module 2 Introduction - Advanced Retrieval Augmented Generation ## Fine-tuning vs RAG; Introduction to Activeloop’s Deep Memory; In this lesson, students will explore various optimization techniques to maximize the performance of large language models (LLMs), such as prompt engineering, fine-tuning, and retrieval-augmented generation (RAG). The lesson begins with identifying the benefits and challenges of each method. It further examines the limitations of RAG systems and introduces Activeloop's Deep Memory as a solution to these challenges, particularly in improving retrieval precision for user queries. Students will see a step-by-step guide on how to implement Deep Memory in experimental workflows, including creating a synthetic training dataset and running inference with the trained Deep Memory model. A significant portion of the lesson is dedicated to hands-on examples using code to demonstrate the increased recall rates when Deep Memory is applied in a RAG system. The lesson concludes with a comparison of empirical data, highlighting the advantages of Deep Memory over traditional methods and emphasizing its role in advancing the capabilities of LLMs.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352087-module-2-introduction-advanced-retrieval-augmented-generation 110,Module 2 Introduction - Advanced Retrieval Augment,"# Module 2 Introduction - Advanced Retrieval Augmented Generation ## **Mastering Advanced RAG Techniques with LlamaIndex** In this lesson, students will learn about the advanced techniques and strategies that enhance the performance of Retrieval-Augmented Generation (RAG) systems, using LlamaIndex as a framework. They will explore the concepts of query construction, query expansion, and query transformation to refine the information retrieval process. Students will also be introduced to advanced strategies like reranking with Cohere Reranker, recursive retrieval, and small-to-big retrieval to further improve the quality and relevance of search results. The lesson includes hands-on examples of setting up a query engine from indexing to querying, as well as creating custom retrievers and utilizing reranking. The conclusion underlines the importance of these techniques and strategies in developing more efficient RAG-based applications.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352087-module-2-introduction-advanced-retrieval-augmented-generation 111,Module 2 Introduction - Advanced Retrieval Augment,"# Module 2 Introduction - Advanced Retrieval Augmented Generation ## Production-Ready RAG Solutions with LlamaIndex In this lesson, students will learn about the challenges, optimization strategies, and best practices for Retrieval-Augmented Generation (RAG) systems in production. The discussion includes dealing with dynamic data management, diverse representation in latent space, regulatory compliance, and model selections for system efficiency. 
The lesson emphasizes the importance of fine-tuning both the embedding models and the Language Large Models (LLMs) to improve retrieval metrics and generate more accurate responses. Additionally, students will explore the role of Intel® technologies in optimizing neural network models on CPUs, and they will acquire knowledge on utilizing generative feedback loops, hybrid searches, and the continuous evaluation of RAG system performance. Practical use cases, data management tools, and integration of metadata in retrieval steps are also highlighted, with LlamaIndex being presented as a comprehensive framework for building data-driven LLM applications.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352087-module-2-introduction-advanced-retrieval-augmented-generation 112,Module 2 Introduction - Advanced Retrieval Augment,"# Module 2 Introduction - Advanced Retrieval Augmented Generation ## Iterative Optimization of LlamaIndex RAG Pipeline: A Step-by-Step Approach In this lesson, you will learn the process of iteratively optimizing a LlamaIndex Retrieval-Augmented Generation (RAG) pipeline to enhance its performance in information retrieval and generating relevant answers. The lesson guides you through establishing a baseline pipeline, experimenting with retrieval values and embedding models like ""text-embedding-ada-002"" and ""cohere/embed-english-v3.0,"" and incorporating techniques like reranking and deep memory to refine document selection. Additionally, you will learn about performance metrics, such as Hit Rate and Mean Reciprocal Rank (MRR), and evaluate faithfulness and relevancy of answers using GPT-4 as a judge. The lesson provides hands-on code examples for each optimization step and concludes with the overall enhancement observed in the RAG pipeline's accuracy.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352087-module-2-introduction-advanced-retrieval-augmented-generation 113,Module 2 Introduction - Advanced Retrieval Augment,"# Module 2 Introduction - Advanced Retrieval Augmented Generation ## Use Deep Memory to Boost RAG Apps' Accuracy by up to +22% In this lesson, you will be introduced to a practical example of Deep Memory. Students will learn about the limitations of current RAG systems, such as suboptimal retrieval accuracy, and explore the benefits of implementing Deep Memory. The lesson explains how Deep Memory provides a significant accuracy boost by optimizing the vector search process using a tailored index from labeled queries. Throughout the lesson, students will be guided through hands-on examples for adopting Deep Memory in their applications, including data loading, creating a relevance dataset, training, and evaluating methods. The lesson emphasizes the practical advantages of this approach, like higher retrieval quality, cost savings from reduced context size needs and compatibility with existing workflows.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352087-module-2-introduction-advanced-retrieval-augmented-generation 114,Module 2 Introduction - Advanced Retrieval Augment,"# Module 2 Introduction - Advanced Retrieval Augmented Generation ## **How to Use Deep Memory with LlamaIndex to Get +15% RAG hit_rate Improvement for Question Answering on Docs?** In this comprehensive tutorial, students will learn about improving the hit rate of Retrieval-Augmented Generators (RAGs) when answering questions from documentation by up to 15% or more using Activeloop's Deep Memory. 
The lesson covers dataset creation and ingestion using BeautifulSoup and LlamaIndex, training deep memory with synthetic queries, evaluating the performance improvement, and leveraging deep memory for real-world inference. By integrating a small neural network layer into the retrieval process, the tutorial demonstrates how to precisely match user queries with relevant data, significantly boosting the accuracy of returned information while maintaining minimal search latency. Students will get hands-on experience with Python libraries and AI models such as OpenAI's GPT-4 and vector store operations to create a more efficient and accurate RAG system.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352087-module-2-introduction-advanced-retrieval-augmented-generation 115,Module 2 Introduction - Advanced Retrieval Augment,"# Module 2 Introduction - Advanced Retrieval Augmented Generation ## **Use Deep Memory with LangChain to Get Up to +22% Increase in Accurate Questions Answers to LangChain Code DB** In this lesson, students will learn how to utilize Activeloop Deep Memory with Langchain to enhance the efficiency and accuracy of Retrieval-Augmented Generation (RAG) systems by parsing documentation, creating datasets, generating synthetic queries, training a retrieval model, evaluating performance, and ultimately integrating Deep Memory into RAG-powered Language Learning Model (LLM) applications. They'll be guided through the practical steps involved in setting up this system, including library installation, data scraping and transformation, model training and evaluation, and even cost-saving measures, all while focusing on the balance between recall, cost, and latency in AI retrieval tasks.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352087-module-2-introduction-advanced-retrieval-augmented-generation 116,Module 2 Introduction - Advanced Retrieval Augment,"# Module 2 Introduction - Advanced Retrieval Augmented Generation ## Deep Memory for RAG Applications Across Legal, Financial, and Biomedical Industries In this comprehensive lesson, students will learn how to enhance RAG systems using Deep Memory in conjunction with LLMs for applications within legal, financial, and biomedical sectors. Students will be guided through the process of preparing datasets, including gathering and chunking data as well as question and relevance score generation using LLMs. The lesson emphasizes the significant performance improvements offered by Deep Memory, such as an increase in retrieval accuracy without compromising search time, and demonstrates how to integrate and test this feature with real datasets—Legalbench, FinQA, and CORD-19. Additionally, students will gain insight into the practical implementation of Deep Memory through code examples and explore the advantages of Deep Memory over classic retrieval methods.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352087-module-2-introduction-advanced-retrieval-augmented-generation 117,Module 1 Introduction - Basics of Retrieval Augmen,"# Module 1 Introduction - Basics of Retrieval Augmented Generation with Langchain and LlamaIndex This module covers the basic concepts of Langchain and Llamaindex and prepares you to build a basic RAG application with both frameworks and help you decide which tool to select for your use case (spoiler: they both have their utility!). 
This is a recap of the Langchain concepts covered in the earlier **[LangChain & Vector Databases in Production](https://learn.activeloop.ai/courses/langchain)** course, together with a brief introduction to the Llamaindex framework. It also contains a clear summary of the different strengths and focus of each framework. This course will be focussed on advanced RAG topics so we recommend taking our earlier course and reading examples from LangChain and Llamaindex documentation to complement this module.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352381-module-1-introduction-basics-of-retrieval-augmented-generation-with-langchain-and-llamaindex 118,Module 1 Introduction - Basics of Retrieval Augmen,"# Module 1 Introduction - Basics of Retrieval Augmented Generation with Langchain and LlamaIndex ## LangChain: ****Basic Concepts Recap**** In this lesson, students will recap on the functionalities and components of LangChain, a framework designed to work with generative AI and LLM-based applications. This material was covered in depth with code and project examples in our earlier course **[LangChain & Vector Databases in Production](https://learn.activeloop.ai/courses/langchain).** Students will be introduced to crucial preprocessing techniques like document loading and chunking, understanding the indexing of document segments and embedding models, as well as the structure and functionality of chains, memory modules, and vector stores. Additionally, students will gain insight into working with chat models, LLMs, embedding models, and constructing sequential chains to automate complex interactions.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352381-module-1-introduction-basics-of-retrieval-augmented-generation-with-langchain-and-llamaindex 119,Module 1 Introduction - Basics of Retrieval Augmen,"# Module 1 Introduction - Basics of Retrieval Augmented Generation with Langchain and LlamaIndex ## **LlamaIndex Introduction: Precision and Simplicity in Information Retrieval** In this lesson, students will learn about the LlamaIndex framework, designed to enhance the capabilities of Large Language Models by integrating them with Retrieval-Augmented Generation (RAG) systems. The framework allows LLM-based applications to fetch accurate and relevant information using vector stores, connectors, nodes, and index types for better-informed responses. The lesson covers vector stores and their importance in semantic search, the role of data connectors and LlamaHub in data ingestion, the creation of node objects from documents, and the indexing of data for quick retrieval. Students will also learn about the practical construction and usage of query engines, routers, and the distinction between saving indexes locally and on the cloud. 
Finally, it compares LlamaIndex with the LangChain frameworks and concludes by discussing the practical application and effectiveness of LlamaIndex in LLM applications.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352381-module-1-introduction-basics-of-retrieval-augmented-generation-with-langchain-and-llamaindex 120,Module 1 Introduction - Basics of Retrieval Augmen,"# Module 1 Introduction - Basics of Retrieval Augmented Generation with Langchain and LlamaIndex ## Chat with Your Code: LlamaIndex and Activeloop Deep Lake for GitHub Repositories In this project lesson, students will learn how to use LlamaIndex in conjunction with Activeloop Deep Lake to index GitHub repositories, enabling interaction with codebases through natural language queries. They will understand both tools' core functionalities, the synergy between data structuring and optimized storage, and the setup process for integrating these technologies. The lesson will guide students through installing necessary packages, setting up a Python virtual environment, loading and parsing GitHub repository data, building an index, and querying this index using a combination of LlamaIndex and Deep Lake. Additionally, the lesson covers the customization and flexibility of LlamaIndex's API for tailored data retrieval and response synthesis, and it concludes by comparing LlamaIndex with LangChain for building chatbots with external data.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352381-module-1-introduction-basics-of-retrieval-augmented-generation-with-langchain-and-llamaindex 121,Crafting AI Assistants via OpenAI and Hugging Face,"# Introduction In this lesson, we will explore the Assistant APIs from OpenAI. We will learn about the primary features of the Assistants API, including the Code Interpreter, Knowledge Retrieval, and Function Calling capabilities. We share a hands-on example to demonstrate the integration of the Code Interpreter with an existing Assistant. The example will show how to enhance an Assistant's ability to provide technical solutions by executing Python code, thus reducing LLM “hallucinations.” We will also introduce other advanced technologies from OpenAI, such as Whisper, Dalle-3, Speech to Text, and the GPT-4 vision API. These tools are useful for anyone looking to develop sophisticated AI assistants using a variety of APIs. Then, we will learn how to use the free Hugging Face Inference API to get access to the thousands of models hosted on their platform. By the end of this lesson, you will have gained a solid understanding of how to apply these technologies in your AI projects effectively.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320367-crafting-ai-assistants-via-openai-and-hugging-face-api 122,Crafting AI Assistants via OpenAI and Hugging Face,"# Open AI Assistant’s **Built-in Functionalities** [The OpenAI Assistants API](https://platform.openai.com/docs/assistants/overview) includes three main functionalities: Code Interpreter, Retrieval, and Function Calling. **`[Code Interpreter](https://platform.openai.com/docs/assistants/tools/code-interpreter)`**: This functionality allows the Assistant to generate and run Python code in a sandboxed execution environment. The Assistant can use Code Interpreter automatically from your conversation or when you upload a file with data. 
It's a tool that transforms the LLM into a more accurate computational problem-solver that can handle tasks like solving complex math equations. It can also generate files with data and images of graphs from the same Python code. It's a useful way to trust the output from the assistant and a great tool when analyzing data. **`[Knowledge Retrieval](https://platform.openai.com/docs/assistants/tools/knowledge-retrieval)`**: This is OpenAI’s own [retrieval augmented generation (RAG)](https://arxiv.org/abs/2005.11401) system offered as part of the Assistants API. It allows multiple uploads. Once the files are uploaded and passed to the Assistant, OpenAI will automatically chunk your documents, index them, store the embeddings, and implement vector search to retrieve relevant content to answer user queries. **`[Function Calling](https://platform.openai.com/docs/assistants/tools/function-calling)`**: Function calling allows you to describe functions or tools to the Assistant and have it return the functions that need to be called along with their arguments. It's a powerful way to add new capabilities to your Assistant.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320367-crafting-ai-assistants-via-openai-and-hugging-face-api 123,Crafting AI Assistants via OpenAI and Hugging Face,"# How To Set Up an Assistant You have two distinct pathways depending on your needs and expertise: - **[Assistants Playground](https://platform.openai.com/playground)**: Ideal for those looking to get a feel for the Assistant's capabilities without going into complex integrations. - **[Detailed Integration through the API](https://platform.openai.com/docs/assistants/overview)**: Best suited for those who require a more customized and in-depth setup. **STEP-BY-STEP ASSISTANT CREATION:** 1. **Creating an `Assistant`**: **Purpose**: An Assistant object represents an entity/agent that can be configured to respond to users’ messages in different ways using several parameters. **Model Selection**: you can specify any version of GPT-3.5 or GPT-4 models, including fine-tuned models. OpenAI recommends using its [latest models](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) with the Assistants API for best results and maximum compatibility with tools. Thus, choose between **`gpt-3.5-turbo-1106`** or **`gpt-4-1106-preview`** models. **Tools:** The Assistant supports the **Code Interpreter** for technical queries that require Python code execution or **Knowledge Retrieval** to augment the Assistant with proprietary external information. 2. **Setting up a `Thread`**: **Role**: A Thread acts as the foundational unit of user interaction. It can be seen as a single **conversation**. Pass any user-specific context and files in this thread by [creating Messages](https://platform.openai.com/docs/api-reference/messages/createMessage). ```python thread = client.beta.threads.create() ``` **Customization**: In Thread, ingest user-specific contexts or attach necessary files so each conversation is unique and personalized. Threads don’t have a size limit. You can add as many messages as you want to a conversation/Thread. The Assistant will ensure that requests to the model fit within the maximum context window, using relevant optimization techniques used in ChatGPT, such as truncation. 3. **Adding a `Message`**: **Definition**: Messages are user inputs, and the Assistant’s answers are appended to a Thread. User inputs can be questions or commands. 
**Function**: They serve as the primary mode of communication between the user and the Assistant. ```python message = client.beta.threads.messages.create( thread_id=thread.id, role=""user"", content=""I need to solve the equation `3x + 11 = 14`. Can you help me?"" ) ``` Messages can include **text, images, and other files**. Messages are stored as a list on the Thread. Using GPT-4 with Vision is not supported here. You can upload images and have them [processed via retrieval](https://platform.openai.com/docs/assistants/tools/knowledge-retrieval). 4. **Executing with `Run`**: **Activation**: For the Assistant to respond to the user message, you must [create a Run](https://platform.openai.com/docs/api-reference/runs/createRun). The Assistant will then automatically decide what previous Messages to include in the context window for the model. **Process**: The Assistant processes the entire Thread, employs its tools if required, and formulates an appropriate response. During its run, the Assistant can call tools or create Messages. Examining Run Steps allows you to check how the Assistant is getting to its final results. 5. **Displaying the `Response`**: **Outcome**: The assistant’s response to a Run: ```python messages = client.beta.threads.messages.list( thread_id=thread.id ) ``` These responses are displayed to the user! During this Run, the Assistant added two new Messages to the Thread.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320367-crafting-ai-assistants-via-openai-and-hugging-face-api 124,Crafting AI Assistants via OpenAI and Hugging Face,"# How To Set Up an Assistant ## **ASSISTANT’S CORE MECHANISM:** Creating an Assistant only requires specifying the **`model`**. But you can further customize the behavior of the Assistant: 1. Use the **`instructions`** parameter to guide the personality of the Assistant and define its goals. Instructions are similar to system messages in the Chat Completions API. 2. Use the **`tools`** parameter to give the Assistant access to up to 128 tools in parallel. You can give it access to OpenAI-hosted tools (Code Interpreter, Knowledge Retrieval) or call third-party tools via **`function calling`**. 3. Use the **`file_ids`** parameter to give the tools access to files. Files are uploaded using the [File upload endpoint](https://platform.openai.com/docs/api-reference/files/create).",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320367-crafting-ai-assistants-via-openai-and-hugging-face-api 125,Crafting AI Assistants via OpenAI and Hugging Face,"# How To Set Up an Assistant ## **Example demonstration:** Imagine you're developing an AI assistant for a tech company. This assistant needs to provide detailed product support using a comprehensive knowledge base. ```bash mkdir openai-assistants && cd openai-assistants python3 -m venv openai-assistants-env source openai-assistants-env/bin/activate pip3 install python-dotenv pip3 install --upgrade openai # fire up VSCode and let's get rolling! code . ``` Create a `.env` file in the project folder and add your OpenAI API key, which you can get from your [OpenAI developer account](https://platform.openai.com/api-keys), so that `python-dotenv` can load it. ```python OPENAI_API_KEY=""sk-xxx"" ``` ```bash $ pip install -U -q openai ``` ### **Upload Files to a Knowledge Base**: First, make a folder to store all the files you’ll create. 
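A small sketch of this setup step, assuming the `.env` file from above (the folder name `knowledge_base` is just an illustration):
```python
import os
from dotenv import load_dotenv
from openai import OpenAI

# Folder for the files that will back the assistant's knowledge base
os.makedirs('knowledge_base', exist_ok=True)

# Load OPENAI_API_KEY from the .env file and initialize the client
load_dotenv()
client = OpenAI()
```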
**Upload** a detailed PDF manual of a product line (e.g., ""tech_manual.pdf"") using the API: ```python from openai import OpenAI client = OpenAI() file = client.files.create( file=open(""tech_manual.pdf"", ""rb""), purpose=""assistants"" ) ``` Now you can create the assistant with an uploaded file and with the ability to retrieve: `tools=[{""type"": ""retrieval""}]` ```python assistant = client.beta.assistants.create( instructions=""You are a tech support chatbot. Use the product manual to respond accurately to customer inquiries."", model=""gpt-4-1106-preview"", tools=[{""type"": ""retrieval""}], file_ids=[file.id] ) ``` **User Interaction**: To interact with the assistant, you need a **`thread` and a `message`**. The message should contain the customer's question. Here's an example: ```python thread = client.beta.threads.create() message = client.beta.threads.messages.create( thread_id=thread.id, role=""user"", content=""How do I reset my Model X device?"", ) ``` ### **Run the Thread**: - A customer asks, ""*How do I reset my Model X device?*"" The assistant accesses the uploaded manual, performs a vector search to find the relevant section, and provides clear, step-by-step reset instructions. ```python run = client.beta.threads.runs.create( thread_id=thread.id, assistant_id=assistant.id, ) # the run will enter the queued state before it continues its execution. ``` ### **Information retrieval:** After the run is complete, you can retrieve the assistant's response: ```python messages = client.beta.threads.messages.list( thread_id=thread.id ) assistant_response = messages.data[0].content[0].text.value ``` The output should contain the assistant's response to the customer's question based on knowledge from the uploaded manual. You can see the full code and more examples in this Colab notebook. [Google Colaboratory](https://colab.research.google.com/drive/1GkjoTZUd2rpwIvxmCZreKo5ZUZN6i7vJ)",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320367-crafting-ai-assistants-via-openai-and-hugging-face-api 126,Crafting AI Assistants via OpenAI and Hugging Face,"# OpenAI’s Other Advanced Models OpenAI also offers different types of models that are not yet integrated into the Assistants API but are accessible. These models offer voice processing, image understanding, and image generation capabilities. ### **[Whisper-v3](https://huggingface.co/openai/whisper-large-v3)** Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. It is a transformer-based encoder-decoder model, which is a type of *sequence-to-sequence* model. The latest `large-v3` model shows improved performance across various languages compared to Whisper `large-v2`. OpenAI released the model’s weights with an **Apache License 2.0**. The model is available on [Hugging Face](https://huggingface.co/openai/whisper-large-v3). ### **[Text to Speech](https://platform.openai.com/docs/guides/text-to-speech)** TTS is an AI model that converts text to natural-sounding spoken audio. They offer two different model variants: **`tts-1`** is optimized for real-time text-to-speech use cases, and **`tts-1-hd`** is optimized for quality. These models can be used with the [Speech endpoint in the Audio API](https://platform.openai.com/docs/guides/text-to-speech). ### **[Dall-E 3](https://openai.com/dall-e-3)** A newer iteration of the DALL-E model is designed for image generation. 
It can create images based on user prompts, making it a valuable tool for graphic designers, artists, and anyone to generate images quickly and efficiently. You can access the model through the [image generation endpoint](https://platform.openai.com/docs/guides/images/generations). ### **[GPT-4 Vision](https://openai.com/research/gpt-4v-system-card)** GPT-4 with Vision enables you to ask questions about the contents of images. Visual question answering (VQA) is an important computer vision research field. You can also perform other vision tasks, such as Optical Character Recognition (OCR), where a model reads text in an image. Using GPT-4 with Vision, you can ask questions about what is or is not in an image, how objects relate in an image, the spatial relationships between two objects (is one object to the left or right of another), the color of an object, and more. GPT-4V is available through the [OpenAI web interface for ChatGPT Plus](https://openai.com/?ref=blog.roboflow.com) subscribers and through [their API](https://platform.openai.com/docs/guides/vision/vision). This expands the model's utility beyond the traditional text-only inputs, enabling it to be applied in a wider range of contexts. It handles images through the Chat Completions API, but note that the Assistants API does not support GPT-4V at this time. GPT4-V supports advanced use cases like creating image captions, in-depth analysis of visual content, and interpreting text and graphics in documents. ![https://huggingface.co/front/assets/huggingface_logo-noborder.svg](https://huggingface.co/front/assets/huggingface_logo-noborder.svg)",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320367-crafting-ai-assistants-via-openai-and-hugging-face-api 127,Crafting AI Assistants via OpenAI and Hugging Face,"# **Hugging Face Inference API** Hugging Face (HF) offers a free service for testing and evaluating over 150,000 publicly available machine learning models hosted on their platform through their [Inference API.](https://huggingface.co/docs/api-inference/index) They provide a wide range of models, including transformer and diffusion-based models, that can help solve various NLP or vision tasks such as text classification, sentiment analysis, named entity recognition, etc. ",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320367-crafting-ai-assistants-via-openai-and-hugging-face-api 128,Crafting AI Assistants via OpenAI and Hugging Face,"# **Hugging Face Inference API** ## Steps to use the Inference API**:** 1. [Login](https://huggingface.co/login) to Hugging Face. 2. Navigate to your profile on the top right navigation bar, then click ""Edit profile.” 3. Click on the ""Access Tokens"" menu item. 4. Set the HF HUB API token: ```bash export HUGGINGFACEHUB_API_TOKEN=your-token ``` 1. Use the `HUGGINGFACEHUB_API_TOKEN` as an environment variable ```python import os from huggingface_hub import HfApi hf_api = HfApi(token=os.getenv(""HUGGINGFACEHUB_API_TOKEN"")) ``` 1. Run the Inference API Inference is the process of using a trained model to predict new data. The `huggingface_hub` library provides an easy way to call a service that runs inference for hosted models. As described above, you have two types of services available. - **[Inference API](https://huggingface.co/docs/api-inference/index)**: run accelerated inference on Hugging Face’s infrastructure **for free**. 
- **[Inference Endpoints](https://huggingface.co/docs/inference-endpoints/index)**: easily deploy models to production **(paid)** 6.1 Choose a model from the [Model Hub](https://huggingface.co/models) The model checkpoints are stored in the Model Hub; you can search and share them. Note that not all models are available on the Inference API. Once the endpoint has been created, you should see a URL endpoint of it like the following: ```python ENDPOINT = https://api-inference.huggingface.co/models/ ``` 1. Run the inference. ```python import requests API_URL = ""https://api-inference.huggingface.co/models/"" headers = {""Authorization"": f""Bearer {API_TOKEN}""} def query(payload): response = requests.post(API_URL, headers=headers, json=payload) return response.json() data = query(""Can you please let us know more"") ```",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320367-crafting-ai-assistants-via-openai-and-hugging-face-api 129,Crafting AI Assistants via OpenAI and Hugging Face,"# **Hugging Face Inference API** ## Hugging Face Tasks The team at [Hugging Face has categorized](https://huggingface.co/tasks) several models into the different tasks they can solve. You can find models for popular NLP tasks: Question Answering, Sentence Similarity, Summarization, Table Question Answering, and more. Here is another example of using the Inference API for a summarization task. ```python import requests API_TOKEN = 'your_api_token_here' model_name = 'facebook/bart-large-cnn' text_to_summarize = ""Hugging Face's API simplifies accessing powerful NLP models for tasks like summarization, transforming verbose texts into concise, insightful summaries."" endpoint = f'https://api-inference.huggingface.co/models/{model_name}' headers = {'Authorization': f'Bearer {API_TOKEN}'} data = {'inputs': text_to_summarize} response = requests.post(endpoint, headers=headers, json=data) summarized_text = response.json()[0]['summary_text'] print(summarized_text) ``` We used a pre-trained model, **`[facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn)`,** showcasing its ability to produce clear and concise summaries. [Google Colaboratory](https://colab.research.google.com/drive/1GkjoTZUd2rpwIvxmCZreKo5ZUZN6i7vJ?usp=sharing) **Note:** Not all models are available in this Inference API. Verify if the model is available by reviewing its '*Model card*.’ Sentiment analysis task: ```python import requests headers = {""Authorization"": f""Bearer {API_TOKEN}""} API_URL = ""https://api-inference.huggingface.co/models/distilbert-base-uncased-finetuned-sst-2-english"" def query(payload): response = requests.post(API_URL, headers=headers, json=payload) return response.json() data = query({""inputs"": ""I love how this app simplifies complex tasks effortlessly . 
I'm frustrated by the frequent errors in the software's latest update""}) print(data) ``` Text-to-image task: ```bash # run a few installations !pip install diffusers[""torch""] transformers !pip install -U sentence-transformers ``` ```python from diffusers import StableDiffusionPipeline import torch model_id = ""runwayml/stable-diffusion-v1-5"" pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe = pipe.to(""cuda"") prompt = ""Create an image of a futuristic cityscape on an alien planet, featuring towering skyscrapers with glowing neon lights, a sky filled with multiple moons, and inhabitants of various alien species walking through vibrant market streets"" image = pipe(prompt).images[0] image.save(""astronaut_rides_horse.png"") ``` Resulting image: ![Image generated with stable diffusion](Crafting%20AI%20Assistants%20via%20OpenAI%20and%20Hugging%20Face%20788f0523a80d442dba0530f5d4f76198/Crafting%20AI%20Assistants%20via%20OpenAI%20and%20Hugging%20Face%20bdc8bb2675504d8f983e56b9e63cc93b/f93209da-87e6-43e3-b21d-f052f0b27e51.png) Image generated with stable diffusion You can also encode a sentence and get text embeddings. ```python from sentence_transformers import SentenceTransformer sentences = [""GAIA's questions are rooted in practical use cases, requiring AI systems to interact with a diverse and uncertain world, reflecting real-world applications."", "" GAIA questions require accurate execution of complex sequences of actions, akin to the Proof of Work concept, where the solution is simple to verify but challenging to generate.""] model = SentenceTransformer('Equall/english-beta-0.3', use_auth_token=API_TOKEN) embeddings = model.encode(sentences) print(embeddings) ``` > [[ 0.76227915 -0.5500489 -1.5719271 ... -0.34034422 -0.27251056 0.12204967] [ 0.29783687 0.6476462 -2.0379746 ... -0.28033397 -1.3997376 0.25214267]] > You can also experiment with image-captioning models: ```python from transformers import pipeline image_to_text = pipeline(""image-to-text"", model=""nlpconnect/vit-gpt2-image-captioning"") image_to_text(""https://ankur3107.github.io/assets/images/image-captioning-example.png"") # [{'generated_text': 'a soccer game with a player jumping to catch the ball '}] ``` You can experiment with classification tasks with image-to-text models pre-trained on ImageNet: ```python from transformers import ViTImageProcessor, ViTForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) processor = ViTImageProcessor.from_pretrained('google/vit-base-patch16-224') model = ViTForImageClassification.from_pretrained('google/vit-base-patch16-224') inputs = processor(images=image, return_tensors=""pt"") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print(""Predicted class:"", model.config.id2label[predicted_class_idx]) ``` preprocessor_config.json: 100% 160/160 [00:00<00:00, 10.5kB/s] config.json: 100% 69.7k/69.7k [00:00<00:00, 3.60MB/s] model.safetensors: 100% 346M/346M [00:02<00:00, 162MB/s] ``` Predicted class: Egyptian cat ``` Here, we scrape a web page to get the articles and summarize them with a huggingface model using the inference API. 
```python import requests # Function to fetch text from the API def fetch_text_from_api(): url = ""https://lexper.p.rapidapi.com/v1.1/extract"" querystring = { ""url"": ""https://techcrunch.com/2023/11/25/neuralink-elon-musks-brain-implant-startup-quietly-raises-an-additional-43m/"", ""js_timeout"": ""30"", ""media"": ""true"" } headers = { ""X-RapidAPI-Key"": ""xxx"", ""X-RapidAPI-Host"": ""lexper.p.rapidapi.com"" } response = requests.get(url, headers=headers, params=querystring) data = response.json() # Extract the relevant text from the API response # Adjust the following line according to the structure of your API response return data.get('article', {}).get('text', '') # Function to summarize the text using Hugging Face API def query_huggingface(payload): API_URL = ""https://api-inference.huggingface.co/models/facebook/bart-large-cnn"" headers = {""Authorization"": f""Bearer {API_TOKEN}""} response = requests.post(API_URL, headers=headers, json=payload) return",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320367-crafting-ai-assistants-via-openai-and-hugging-face-api 130,Crafting AI Assistants via OpenAI and Hugging Face,"# **Hugging Face Inference API** ## Hugging Face Tasks response.json() # Fetch the text text_to_summarize = fetch_text_from_api() # Summarize the text summarization_payload = { ""inputs"": text_to_summarize, ""parameters"": {""do_sample"": False}, } summary_response = query_huggingface(summarization_payload) print(summary_response) ``` > [{'summary_text': 'Elon Musk-founded company raises $43 million in new venture capital. The company is developing implantable chips that can read brain waves. Critics say the company has a toxic workplace culture and unethical research practices. In June, Reuters reported that the company was valued at about $5 billion.'}] >",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320367-crafting-ai-assistants-via-openai-and-hugging-face-api 131,Crafting AI Assistants via OpenAI and Hugging Face,"# Conclusion In this lesson, we learned to use the OpenAI Assistants API, which enables tools like Code Interpreter and Knowledge Retrieval for enhanced functionality. Essential components like `Threads` and `Messages` facilitate user interaction, with the Assistant processing inputs and generating responses. We also demonstrated how an AI assistant can be deployed in a tech support example, utilizing these tools and methodologies for effective customer interaction. We also explored Hugging Face's free Inference API, which offers many models that can solve different tasks. Through practical examples, we demonstrated how to authenticate, access models via the Model Hub, and perform various NLP tasks, highlighting the API's versatility and ease of use in handling complex AI challenges. Through Function Calling, the OpenAI models can access the Hugging Face models via the free Inference API. 
--- **RESOURCES** [OpenAI Assistants API: Walk-through and Coding a Research Assistant](https://medium.com/@datadrifters/openai-assistants-api-walk-through-and-coding-a-research-assistant-3ddb3d767e99) [GitHub - huggingface/api-inference-community](https://github.com/huggingface/api-inference-community/) - OpenAI API Docs [Product](https://openai.com/product) - Assistants API Colab notebook [Google Colaboratory](https://colab.research.google.com/drive/1GkjoTZUd2rpwIvxmCZreKo5ZUZN6i7vJ) - OpenAI Knowledge Retrieval [OpenAI Platform](https://platform.openai.com/docs/assistants/tools/knowledge-retrieval) - Function Calling OpenAI [Functions with OpenAI Assistant API](https://tmmtt.medium.com/functions-with-openai-assistant-api-47e1481a0b42)",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320367-crafting-ai-assistants-via-openai-and-hugging-face-api 132,LlamaIndex RAG-AGENT Query and Summarize Over Data,"# I**ntroduction** In this lesson, we explore the concept of agents in the LlamaIndex framework, with an emphasis on utilizing these agents as engines with internal reasoning and decision-making mechanisms. Creating an agent-based pipeline includes integrating our RAG-based application with data sources along with various tools. It is essential to remember that developing these tools for the agents requires a deep understanding of how users are likely to engage with the application and predict potential usage patterns. The goal of an RAG system is always to provide users with insightful content more effectively than extensive manual searches. Adding agents to our system is another step towards improving our product's user experience and decision-making ability. The LlamaIndex framework offers numerous possibilities for combining agents and tools to enhance the abilities of Large Language Models. We will examine the implementation of OpenAI agents with various data sources. Additionally, we'll create custom functions to boost the agent's capabilities in areas where they may lack information, such as mathematical operations. The rest of this lesson will demonstrate how these agents are capable of making decisions and integrating various resources to formulate a response. Before diving into codes, we must prepare our environment by installing the necessary packages and configuring the API keys. Execute the following command in your terminal to install the required packages using the Python Package Manager (PIP). Next, run the subsequent Python script to configure the API keys in your environment. Remember to obtain the keys from the OpenAI and Activeloop platforms and substitute them for the placeholders. ```bash pip install -q llama-index==0.9.14.post3 deeplake==3.8.8 openai==1.3.8 cohere==4.37 ``` ```bash import os os.environ['OPENAI_API_KEY'] = '' os.environ['ACTIVELOOP_TOKEN'] = '' ``` Now, let’s go through the next steps in detail.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320366-llamaindex-rag-agent-query-and-summarize-over-database 133,LlamaIndex RAG-AGENT Query and Summarize Over Data,"# OpenAI Agent ## Step 1: **Defining Data Sources** Discussing datasets in RAG mainly refers to data sources. It would be interesting to tag and track right from the start. This means keeping track of the general source of the data, whether it is from a specific book, documentation, or a blog. 
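One lightweight way to do this in LlamaIndex is to attach the source name as document metadata so that it travels with every chunk and can later drive filtering or routing. A hedged sketch (the source labels and texts are illustrative placeholders):
```python
from llama_index import Document

# Tag each document with its general source so retrieved chunks stay traceable
docs = [
    Document(text='LangChain chains combine prompts, models, and output parsers...',
             metadata={'source': 'langchain_docs'}),
    Document(text='Deep Lake stores embeddings alongside the original data...',
             metadata={'source': 'activeloop_docs'}),
]
```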
For example, the Towards AI RAG AI tutor currently has 5 “data sources”: Towards AI blogs, Activeloop documentation, LlamaIndex documentation, LangChain documentation, and HuggingFace documentation. Later, when we increase the dataset size with new data points, we add them to those sources or create new ones. Doing this process from the start will improve the chatbot’s efficiency by introducing the “routers” to focus on the related information source to answer a question. A key step in building a data-driven application with the LlamaIndex RAG system is discussing and selecting the appropriate dataset. The quality and relevance of the data are fundamental, as they directly influence the system's performance capabilities. A well-chosen dataset is essential to showcase and test the effectiveness of our RAG system accurately. It will be the same whether they are local files or hosted online with a vector store database like Deep Lake. However, note that using online tools like Deep Lake has built-in features to easily visualize, query, track, and manage your data. It is a good practice to start your RAG pipeline design with a small dataset, such as web articles. Setting up a foundational data environment that is manageable yet sufficiently rich is critical to ensuring a smooth start. This way, you can quickly test, debug, and, most importantly, understand your RAG system. You can easily query and evaluate responses on a dataset you control and grasp. The dataset for this lesson will comprise Nikola Tesla's life, work, and legacy, with detailed information about his innovations, personal history, and impact. We employ two text documents: the first with bold future predictions that Tesla mentioned during his lifetime and the second file with biographical details about his life. Let's import the files and set up the indexes. We will utilize a mix of data sources from the Deep Lake vector store for the first file and establish indexes from local storage for the second file. The initial step involves downloading the documents using the wget command. Alternatively, you can access and manually save the files from the URLs below. ```bash mkdir -p 'data/1k/' wget 'https://github.com/idontcalculate/data-repo/blob/main/machine_to_end_war.txt' -O './data/1k/tesla.txt' wget 'https://github.com/idontcalculate/data-repo/blob/main/prodigal_chapter10.txt' -O './data/1k/web.txt' ``` ### Store Indexes Deep Lake As previously stated, we'll read the first text file and process it for storage in Deep Lake. The `SimpleDirectoryReader` class in LlamaIndex can browse through a directory and transform text files into a `Document` object, facilitating processing. ```python from llama_index import SimpleDirectoryReader tesla_docs = SimpleDirectoryReader( input_files=[""/content/data/1k/tesla.txt""] ).load_data() ``` We are now ready to establish a database on the Activeloop platform by specifying the organization ID (which defaults to your username) and naming the database. The `DeepLakeVectorStore` class is used to create an empty database. ```python from llama_index.vector_stores import DeepLakeVectorStore # By default, the organization id is your username. my_activeloop_org_id = """" my_activeloop_dataset_name = ""LlamaIndex_tesla_predictions"" dataset_path = f""hub://{my_activeloop_org_id}/{my_activeloop_dataset_name}"" # Create an index over the documnts vector_store = DeepLakeVectorStore(dataset_path=dataset_path, overwrite=False) ``` ```python Your Deep Lake dataset has been successfully created! 
``` Then, we can utilize the database object to create a storage context, allowing us to generate indexes (embeddings) and insert them into the database using the `VectorStoreIndex` class. ```python from llama_index.storage.storage_context import StorageContext from llama_index import VectorStoreIndex storage_context = StorageContext.from_defaults(vector_store=vector_store) tesla_index = VectorStoreIndex.from_documents(tesla_docs, storage_context=storage_context) ``` ```python Uploading data to deeplake dataset. 100%|██████████| 5/5 [00:00<00:00, 7.17it/s] /Dataset(path='hub://genai360/LlamaIndex_tesla_predictions', tensors=['text', 'metadata', 'embedding', 'id']) tensor htype shape dtype compression -------",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320366-llamaindex-rag-agent-query-and-summarize-over-database 134,LlamaIndex RAG-AGENT Query and Summarize Over Data,"# OpenAI Agent ## Step 1: **Defining Data Sources** ------- ------- ------- ------- text text (5, 1) str None metadata json (5, 1) str None embedding embedding (5, 1536) float32 None id text (5, 1) str None ``` The index we have created for the first file is ready to be integrated as a source in the pipeline. However, we must also process the second file before proceeding. ### Store Indexes Locally The method to save the index on your hard drive begins similarly to our earlier demonstration, employing the `SimpleDirectoryReader` class. ```python webtext_docs = SimpleDirectoryReader(input_files=[""/content/data/1k/web.txt""]).load_data() ``` Just as we utilized the `StorageContext` class earlier for employing the DeepLake database as storage, we can apply the same configuration but specify a directory to store the indexes. The following script initially attempts to load any pre-existing indexes if they were previously computed. If not, it uses the `.persist()` method to store the indexes. As indicated by the output, the index is generated. If you execute this code block again, it will retrieve the stored checkpoint instead of reprocessing and regenerating indexes. ```python try: # Try to load the index if it is already calculated storage_context = StorageContext.from_defaults( persist_dir=""/content/storage/webtext"" ) webtext_index = load_index_from_storage(storage_context) print(""Loaded the pre-computed index."") except: # Otherwise, generate the indexes webtext_index = VectorStoreIndex.from_documents(webtext_docs) webtext_index.storage_context.persist(persist_dir=""/content/storage/webtext"") print(""Generated the index."") ``` ```python Generated the index. ``` With data acquired from two distinct sources, let's utilize the query engine and its tools to develop an agent capable of integrating this information.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320366-llamaindex-rag-agent-query-and-summarize-over-database 135,LlamaIndex RAG-AGENT Query and Summarize Over Data,"# OpenAI Agent ## Step 2: ****Query Engine**** Once the index is established, the query engine used for searching and retrieving data from the index can be efficiently set up. ```python tesla_engine = tesla_index.as_query_engine(similarity_top_k=3) webtext_engine = webtext_index.as_query_engine(similarity_top_k=3) ``` The similarity parameter `top_k=3` is set to `3` for the search, which means to return the top 3 most similar results for a given query. As previously mentioned, the query engine tool comprises two distinct data sources. 1. 
The `tesla_engine` variable handles queries about general information. 2. The `webtext_engine` variable processes biographical data, focusing on inputs with factual content. This separation of data types ensures data quality when querying instead of always fetching from both sources with equal weight. With the query engine now constructed, the tools can be configured. We can use a combination of the `QueryEngineTool` class to create a new tool that includes a query engine and the `ToolMetaData` class, which assists in assigning names and descriptions to the tools. These descriptions will help the agent determine the most suitable data source based on the user's query. We will create a list of two tools, each representing one of our data sources. ```python from llama_index.tools import QueryEngineTool, ToolMetadata query_engine_tools = [ QueryEngineTool( query_engine=tesla_engine, metadata=ToolMetadata( name=""tesla_1k"", description=( ""Provides information about Tesla's statements that refers to future times and predictions. "" ""Use a detailed plain text question as input to the tool."" ), ), ), QueryEngineTool( query_engine=webtext_engine, metadata=ToolMetadata( name=""webtext_1k"", description=( ""Provides information about tesla's life and biographical data. "" ""Use a detailed plain text question as input to the tool."" ), ), ), ] ``` Here’s a clear schematic representation of our current system. The query engine is depicted at the top, signifying its role as the primary tool orchestrating everything. It is positioned centrally between the data sources and the process of formulating the final answer. It acts as a bridge between the proposed questions and their respective answers. ![Our baseline RAG pipeline with a query engine, data sources, and question-answer configuration.](LlamaIndex%20RAG-AGENT%20Query%20and%20Summarize%20Over%20Data%2088a5dfe909974fa8aade9542b8f3a1dd/LlamaIndex%20RAG-AGENT%20Query%20and%20Summarize%20Over%20Data%2091c649d25137470eb486285c008ba99c/image_2023-12-10_035919501.png) Our baseline RAG pipeline with a query engine, data sources, and question-answer configuration. After establishing the basic RAG mechanism with LlamaIndex, the next step is integrating an agent. This addition enables easy testing of the retrieval system. We can then add system design improvements and feature enhancements once the core functionality has been tested and verified.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320366-llamaindex-rag-agent-query-and-summarize-over-database 136,LlamaIndex RAG-AGENT Query and Summarize Over Data,"# OpenAI Agent ## ****Step 3: The Agent**** Now, let’s set up our agent. In this case, it will be the OpenAI agent. Integrating the query engine tools into the `OpenAIAgent` module from LlamaIndex enables the agent to execute queries. Setting the `verbose` argument to True is excellent for debugging. It will allow us to investigate which tool the agent is using and the intermediate steps. You could set the argument to False to only receive the final output. ```python from llama_index.agent import OpenAIAgent agent = OpenAIAgent.from_tools(query_engine_tools, verbose=True) ``` And that’s it! Now that we have our agent, we can execute an interactive chat interface (REPL, Read-Eval-Print Loop) where the agent can receive inputs (like questions or prompts), process them, and return responses, making it a conversational agent capable of handling a dialogue or chat session. 
```python agent.chat_repl() ``` ``` ===== Entering Chat REPL ===== Type ""exit"" to exit. Human: What influenced Nikola Tesla to become an inventor? STARTING TURN 1 --------------- === Calling Function === Calling function: webtext_1k with args: { ""input"": ""What influenced Nikola Tesla to become an inventor?"" } Got output: Nikola Tesla was influenced to become an inventor by his studies of mechanical vibrations. He observed the selective response of objects to vibrations and realized the potential for producing effects of tremendous magnitude on physical objects. This led him to pursue research in the field of high-frequency and high-potential currents, which eventually resulted in his groundbreaking inventions. ======================== STARTING TURN 2 --------------- Assistant: Nikola Tesla was influenced to become an inventor by his studies of mechanical vibrations. He observed the selective response of objects to vibrations and realized the potential for producing effects of tremendous magnitude on physical objects. This led him to pursue research in the field of high-frequency and high-potential currents, which eventually resulted in his groundbreaking inventions. Human: exit ``` ",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320366-llamaindex-rag-agent-query-and-summarize-over-database 137,LlamaIndex RAG-AGENT Query and Summarize Over Data,"# Agents with Custom Function We explored the potential of creating Query Engine tools to enhance an OpenAI-based agent with additional data sources. We observed the capability of agents to select the appropriate tool based on the user's prompt. This decision-making ability can be applied across a broad range of applications. For instance, one area where Large Language Models typically fall short is mathematical operations. A basic addition or subtraction equation, which may seem straightforward to many, can be challenging for these models. A practical solution to this issue is to equip the models with tools like a calculator for use as needed. This section will create a custom function that a chatbot can access for essential multiplication or addition calculations whenever required. Initially, we must define a custom function tailored to each task. These custom functions can accept an arbitrary number of inputs and generate an output. Their capabilities can range from a simple addition operation, as in our example, to more complex tasks such as conducting web searches, querying other Large Language Models, or utilizing data from external APIs to answer a question. ```python def multiply(a: int, b: int) -> int: """"""Multiply two integers and returns the result integer"""""" return a * b def add(a: int, b: int) -> int: """"""Add two integers and returns the result integer"""""" return a + b from llama_index.tools import FunctionTool multiply_tool = FunctionTool.from_defaults(fn=multiply, name=""multiply"") add_tool = FunctionTool.from_defaults(fn=add, name=""add"") all_tools = [multiply_tool, add_tool] ``` The above code establishes two functions, titled 'add' and 'multiply'. It is crucial in this setup to specify data types for the input arguments (`a:int, b:int`), the return type of the function (`->int`), and a concise explanation of the function's purpose, provided within the triple quotes beneath the function name. These details will be used by the `FunctionTool` class’s `.from_defaults()` method to form a description of the function, which can then be used by the agent. 
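To sanity-check what the agent will actually see, you can print the generated metadata. A small sketch, assuming the `multiply_tool` and `add_tool` defined above:
```python
# Inspect the name and description the agent uses when picking a tool
for tool in [multiply_tool, add_tool]:
    print(tool.metadata.name, '->', tool.metadata.description)
```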
The final variable holds a list of all the available tools. These tools can be used to construct an `ObjectIndex`, which is a wrapper class linking a `VectorStoreIndex` with multiple possible tools. Initially, it's necessary to utilize the `SimpleToolNodeMapping` tool to transform the tool implementations into nodes and then tie everything together. ```python from llama_index import VectorStoreIndex from llama_index.objects import ObjectIndex, SimpleToolNodeMapping tool_mapping = SimpleToolNodeMapping.from_objects(all_tools) obj_index = ObjectIndex.from_objects( all_tools, tool_mapping, VectorStoreIndex, ) ``` Note that we do not incorporate any data source in this implementation. This approach is intentional, as we aim to enhance the capabilities of Large Language Models with additional tools. In the next code block, you will see that we are utilizing the defined object index as a retriever! This implies that within the LlamaIndex framework, the custom functions are treated as additional data sources. So, we describe the agent object using the `FnRetrieverOpenAIAgent` class. ```python from llama_index.agent import FnRetrieverOpenAIAgent agent = FnRetrieverOpenAIAgent.from_retriever( obj_index.as_retriever(), verbose=True ) ``` Ultimately, we can employ the agent to ask questions, and the agent utilizes the multiply function to provide answers. ```python agent.chat(""What's 12 multiplied by 22? Make sure to use Tools"") ``` ``` STARTING TURN 1 --------------- === Calling Function === Calling function: multiply with args: { ""a"": 12, ""b"": 22 } Got output: 264 ======================== STARTING TURN 2 --------------- AgentChatResponse(response='12 multiplied by 22 is 264.', sources=[ToolOutput(content='264', tool_name='multiply', raw_input={'args': (), 'kwargs': {'a': 12, 'b': 22}}, raw_output=264)], source_nodes=[]) ``` In the previous example, we specified in the prompt that the agent should utilize the tools. Additionally, it's possible to employ the `tool_choice` argument to explicitly direct the agent to use specific tools or to use the `auto` keyword to let the agent decide. ```python response = agent.chat( ""What is 5",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320366-llamaindex-rag-agent-query-and-summarize-over-database 138,LlamaIndex RAG-AGENT Query and Summarize Over Data,"# Agents with Custom Function + 2?"", tool_choice=""add"" ) ``` ``` STARTING TURN 1 --------------- === Calling Function === Calling function: add with args: { ""a"": 5, ""b"": 2 } Got output: 7 ======================== STARTING TURN 2 --------------- AgentChatResponse(response='5 + 2 is equal to 7.', sources=[ToolOutput(content='7', tool_name='add', raw_input={'args': (), 'kwargs': {'a': 5, 'b': 2}}, raw_output=7)], source_nodes=[]) ```",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320366-llamaindex-rag-agent-query-and-summarize-over-database 139,LlamaIndex RAG-AGENT Query and Summarize Over Data,"# Agents from LlamaHub Agents can offer a broad range of functionalities, significantly extending the capabilities of Large Language Models into unexplored realms. LlamaHub streamlines the curation, sharing, and usage of more than 30 agents, achievable with just one line of code. We have already explored its application for scrapping data from Wikipedia in the LlamaIndex Unlocked lesson. To see a complete list of implemented agents, [click here](https://llamahub.ai/?tab=tools). 
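As a rough illustration of the pattern, a LlamaHub tool spec can be pulled in and handed to an agent in a few lines. This is a hedged sketch assuming the `llama-hub` package (and the `wikipedia` dependency it relies on) is installed, using the Wikipedia tool as an example:
```python
from llama_hub.tools.wikipedia.base import WikipediaToolSpec
from llama_index.agent import OpenAIAgent

# Convert the tool spec into a list of tools the agent can call
wiki_spec = WikipediaToolSpec()
agent = OpenAIAgent.from_tools(wiki_spec.to_tool_list(), verbose=True)

print(agent.chat('Who was Nikola Tesla?'))
```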
",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320366-llamaindex-rag-agent-query-and-summarize-over-database 140,LlamaIndex RAG-AGENT Query and Summarize Over Data,"# Conclusion In this lesson, we discussed how to utilize agents to enhance the capabilities of Large Language Models by integrating new tools that unlock their potential. We experimented with employing these agents as decision-making functions to incorporate various data sources in response to user queries. Additionally, we explored their use as reasoning machines, combined with custom functions, to further amplify their abilities. The ability to make function calls is a potent aspect of designing agents, enabling the easy integration of additional information into the model from virtually any imaginable resource. --- >> [Notebook](https://colab.research.google.com/drive/1dYeR3M14BOwC7hkWS5Yo_zMFD4HMJpQn?usp=sharing). --- ### R**esources:** the RAG-AGENT example notebook: [Google Colaboratory](https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/examples/agent/openai_agent_with_query_engine.ipynb#scrollTo=f9f3158a-7647-4442-8de1-4db80723b4d2) - LlamaHub on GitHub: [](https://github.com/run-llama/llama-hub/tree/main/llama_hub/tools/notebooks) - data agents [Data Agents - LlamaIndex 🦙 0.9.11.post1](https://docs.llamaindex.ai/en/stable/module_guides/deploying/agents/root.html) - [https://docs.llamaindex.ai/en/latest/examples/agent/openai_agent_with_query_engine.html](https://docs.llamaindex.ai/en/latest/examples/agent/openai_agent_with_query_engine.html) - [https://docs.llamaindex.ai/en/latest/examples/agent/multi_document_agents.html](https://docs.llamaindex.ai/en/latest/examples/agent/multi_document_agents.html)",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320366-llamaindex-rag-agent-query-and-summarize-over-database 141,Course Introduction and Logistics,"# Course Introduction and Logistics ## Introduction to the “Retrieval Augmented Generation for Production with LlamaIndex and LangChain” Course Activeloop, Towards AI, and the Intel Disruptor Initiative are excited to collaborate to bring Gen AI 360: Foundational Model Certification Course for aspiring Generative AI professionals, executives, and enthusiasts of tomorrow. Following the success of our ""[LangChain & Vector Databases In Production](Course%20Introduction%20and%20Logistics%20d5eb79d71a2f4d1e9aa986a2f0538b3f.md)"" and ""[Training and Fine-tuning LLMs for Production](https://www.notion.so/Careers-Activeloop-9dad048520b6452e8954830d61250c5a?pvs=21)” courses, we're excited to welcome you to the third part of the series: “Retrieval Augmented Generation for Production with LlamaIndex and LangChain.” In this course, you'll learn how to build on RAG techniques learned in the first Langchain and Vector DBs in Production course. Apart from a primer on novel LangChain RAG frameworks, you will learn advanced RAG techniques with LlamaIndex and how to build RAG agents and RAG evaluation systems. This course will guide you on the optimal methods and practices for getting RAG production-ready with plenty of applied industry project examples. Let's get started! ### **Why This Course?** The “Retrieval Augmented Generation for Production with LlamaIndex and LangChain” course provides the theoretical knowledge and practical skills necessary to build advanced RAG products. 
Many human tasks across various industries can be assisted with AI by combining LLMs, prompting, RAG, and fine-tuning workflows. We are huge fans of RAG because it helps with 1) reducing hallucinations by limiting the LLM to answer based on existing documentation, 2) helping with explainability, error checking, and copyright issues by clearly referencing its sources for each comment, 3) giving private/specific or more up-to-date data to the LLM, 4) and not relying on black box LLM training/fine tuning for what the models know and has memorized. We touched upon basic RAG in our first Langchain and Vector DBs course, but building more advanced and reliable products requires more complex techniques and iterations of the model. The 'Retrieval Augmented Generation for Production with LlamaIndex and LangChain' course aims to provide you with the theoretical knowledge and practical skills necessary to develop products and applications centered on RAG. A fundamental pillar of our course is the focus on hands-on learning. Real-world application and experimentation are crucial for a deep understanding and effective use of RAG techniques. In this course, you will move beyond basic RAG apps, develop these applications with more advanced techniques, build RAG agents, and evaluate the performance of RAG systems. ### **Who Should Take This Course?** Whether planning to build a chat with data application for your organization or just learning how to leverage Generative AI in various industries, this course is for you. The course addresses critical issues such as reducing hallucinations in AI outputs, enhancing explainability, addressing copyright concerns, and offering more tailored, up-to-date data inputs. We go beyond basic RAG applications, equipping you with the skills to create more complex, reliable products with tools like LangChain, LlamaIndex, and Deep Memory. Emphasizing hands-on learning, this course is a gateway to mastering advanced RAG techniques and applications in real-world scenarios. Please note that prior knowledge of coding and Python is a prerequisite. ### **What You Will Learn** You will start by learning the basic RAG tools, such as loading, indexing, storing, and querying in both Langchain and LlamaIndex. We’ll also demystify the two libraries to help you select the right one when working with RAG or other LLM applications. You will then move towards more advanced RAG techniques aimed at surfacing and using more relevant information from the dataset. We cover techniques such as Query expansion, Transformation reranking, recursive retrieval, optimization, and production tips and techniques with LlamaIndex. We also introduce how better embedding management through Activeloop’s Deep Memory can be used to improve accuracy. We then progress to the exciting stuff: learning how",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320308-course-introduction-gen-ai-360-foundational-model-certification 142,Course Introduction and Logistics,"# Course Introduction and Logistics ## Introduction to the “Retrieval Augmented Generation for Production with LlamaIndex and LangChain” Course to build RAG agents in Langchain and Llamaindex, an introduction to OpenAI assistants and some other tools & models that can be used in RAG products. We conclude with a summary of RAG evaluation techniques in LlamaIndex together with an introduction to Langsmith in Langchain. ### Is the Course Free? Yes, the course is entirely free for everybody. 
However, running the project examples yourself will cost you some API and cloud credits. ### **Certification** By participating in this course and completing the quizzes at the end of each chapter, you will have the opportunity to earn a certification in using Deep Lake - a valuable addition to your professional credentials. This certification program, offered at no cost, forms part of the Deep Lake Foundational Model Certification Program in collaboration with Intel Disruptor Initiative and Towards AI.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320308-course-introduction-gen-ai-360-foundational-model-certification 143,Course Introduction and Logistics,"# Course Introduction and Logistics ## Course Logistics Here's everything you need to know about the course. ### **Course Hosting and Pace** This course is hosted by **Activeloop**. It is designed as a **self-paced** learning journey, allowing you to proceed at your own comfort. The online format provides flexibility to engage with the lessons whenever it best suits you. At the end of each module, you can test your new knowledge with multiple-choice quizzes, which are mandatory to continue the course. You will receive your course certification after completing all the quizzes. ### **Community Support** Have questions about this course or specific lessons? Want to exchange ideas with fellow learners? We encourage active interaction in the dedicated forum in the *[Towards AI’s Learn AI Together Discord Community](https://discord.com/invite/learnaitogether), a* vibrant community of over 55,000 AI experts and enthusiasts. Our community has a dedicated channel for this course where you can ask questions and share insights. For queries specifically related to Deep Lake, please join the *[Deep Lake Slack community](https://join.slack.com/t/hubdb/shared_invite/zt-ivhsj8sz-GWv9c5FLBDVw8vn~sxRKqQ),* where experts and users will be ready to assist. ### **Required Platforms, Tools, and Cloud Tokens** The course involves practical projects and exercises that require various tools and platforms. These will be thoroughly guided in the individual lessons. However, the main platforms that you will use throughout the course are: - **Activeloop’s Deep Lake** - **Open AI** - **LlamaIndex** - **Langchain** - **Langchain’s Langsmith**",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320308-course-introduction-gen-ai-360-foundational-model-certification 144,Course Introduction and Logistics,"# Course Introduction and Logistics ## **What is Activeloop?** [Activeloop](https://www.activeloop.ai/) is a tech company dedicated to building data infrastructure optimized for deep-learning applications. It offers a platform that seamlessly connects unstructured data types, like audio, video, and images, to machine learning models. Their main product, Deep Lake, ensures data streaming, scalable machine learning pipelines, and dataset version control. Such infrastructures are particularly beneficial when dealing with the demands of training and fine-tuning models for production. ### **What is Deep Lake?** Deep Lake is an open-source data lake designed for deep learning applications. It retains essential features of traditional data lakes, including SQL queries, ACID transactions, and dataset visualization. It specializes in storing complex data in tensor form, efficiently streaming data to deep learning frameworks. 
Built to be serverless on a columnar storage format, it also offers native version control and in-browser data visualization, complementing the needs of LLM training and deployment processes. ### Activeloop **Credits - Free Extended Trial for Deep Lake Growth Plan for a Month** As a part of the course, all course takers can redeem a free extended trial of one month for the Activeloop Growth plan by redeeming the GENAI360 promo code at checkout. ### **How to set up a Deep Lake account?** To set up a Deep Lake account, navigate to the [app’s registration page](https://app.activeloop.ai/register/) and sign up. Follow the on-screen instructions and add the required details. Once you've verified your email and established a secure password, your account will be active and ready for use. **How to get the Deep Lake API token?** 1. After logging in, you should see your homepage. You should now see a “Create API token” button at the top of your homepage. Click on it, and you’ll get redirected to the “API tokens” page. This is where you can generate, manage, and revoke your API keys for accessing Deep Lake. 2. Click on the ""Create API token"" button. You should see a popup asking for a token name and an expiration date. By default, the token expiration date is one year. Once you’ve set the token name and its expiration date, click the “Create API token” button. 3. You should now see a green banner saying that the token has been successfully generated, along with your new API token, on the “API tokens” page. To copy your token to your clipboard, click the square icon on its right.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320308-course-introduction-gen-ai-360-foundational-model-certification 145,Course Introduction and Logistics,"# Course Introduction and Logistics ## **Coding Environment and Packages** Before starting this course, you need to ensure that you have the appropriate coding environment ready. Please make sure to use a Python version equal to or later than **3.8.1**. You can set up your environment by choosing one of the following options: 1. Having a code editor installed on your computer. A popular coding environment is [Visual Studio Code](https://code.visualstudio.com/). 2. Using Python virtual environments to manage Python libraries. 3. Alternatively, you could use Google Colab notebooks. You will need the following packages to successfully execute the sample codes provided in each lesson. They can be installed using the `pip` package manager. ``` deeplake==3.6.19 openai==0.27.8 tiktoken==0.4.0 transformers==4.32.0 torch==2.0.1 numpy==1.23.5 deepspeed==0.10.1 trl==0.7.1 peft==0.5.0 wandb==0.15.8 bitsandbytes==0.41.1 accelerate==0.22.0 tqdm==4.66.1 neural_compressor==2.2.1 onnx==1.14.1 pandas==2.0.3 scipy==1.11.2 ``` While we strongly recommend installing the latest versions of these packages, please note that the code has been tested with the versions specified above. Moreover, specific lessons may require the installation of additional packages, which will be explicitly mentioned. The following command demonstrates how to install a package using pip. ```bash pip install deeplake # Or: (to install a specific version) # pip install deeplake==3.6.5 ``` ### **Google Colab** Google Colaboratory, popularly known as Google Colab, is a *free cloud-based Jupyter notebook environment*. Data scientists and engineers widely use it to train machine learning and deep learning models using CPUs, GPUs, and TPUs. 
Google Colab comes with an array of features, such as: - Free access to GPUs and TPUs for accelerated model training. - A web-based interface for a service running on a virtual machine, eliminating the need for local software installation. - Seamless integration with Google Drive and GitHub. To use Google Colab, all you need is a Google account. You can run terminal commands directly in notebook cells by appending an exclamation mark (!) before the command. Every notebook created in Google Colab gets stored in your Google Drive for easy access. A convenient way of using API keys in Colab involves: 1. Saving them in a file named `.env` on your Google Drive. Here’s how the file should be formatted to save the Activeloop token and the OpenAI API key. ```python ACTIVELOOP_TOKEN=your_activeloop_token OPENAI_API_KEY=your_openai_key ``` 1. Mounting your Google Drive on your Colab instance. 2. Loading them as environment variables using the **`dotenv`** library, like in the following code. ```python from dotenv import load_dotenv load_dotenv('/content/drive/MyDrive/path/to/.env') ``` ### **Creating Python Virtual Environments** Python virtual environments offer an excellent solution for managing Python libraries and avoiding package conflicts. They create isolated environments for installing packages, ensuring that your packages and their dependencies are contained within that environment. This setup provides clean and isolated environments for your Python projects. Begin by executing the `python` command in your terminal to confirm that the Python version is either equal to or greater than 3.8.1. Then follow these steps to create a virtual environment: 1. Create a virtual environment using the command **`python -m venv my_venv_name`**. 2. Activate the virtual environment by executing **`source my_venv_name/bin/activate`**. 3. Install the required libraries and run the code snippets from the lessons within the virtual environment. *Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries.* ### **📝 Legal Note on Availability of Credits & Applicable Terms 📝** **Availability of Credits:** - **Validity:** Credits acquired upon completion of the GenAI360 Foundational Model Certification Course must be redeemed within [12] months of the",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320308-course-introduction-gen-ai-360-foundational-model-certification 146,Course Introduction and Logistics,"# Course Introduction and Logistics ## **Coding Environment and Packages** course completion date, after which they will become void. - **Usage:** Credits are applicable for use across select programs, courses, workshops, or other specified opportunities offered by Activeloop and/or other partners and cannot be transferred to external platforms or exchanged for cash or other forms of payment. **Terms & Conditions:** 1. **Eligibility:** Credits are available only to qualifying participants who have successfully completed the GenAI360 Foundational Model Certification Course and have adhered to all course and evaluation criteria. 2. **Redemption:** Credits can only be redeemed by the individual participant to whom they were issued and are non-transferable to other individuals or entities. 3. **Modification:** Activeloop reserves the right to modify the availability, applicability, or value of credits without prior notice. Participants will be notified of any such changes via registered email. 4. 
**Verification:** All participant details and attached documents will undergo a verification process. Any discrepancies or fraudulent information may result in the forfeiture of credits and possible exclusion from future programs. 5. **Data Protection:** Participant data will be handled with utmost confidentiality and will only be used for the purpose of credit redemption in compliance with applicable data protection regulations. 6. **Limitation:** The credits are not stackable and cannot be combined with other ongoing offers or discounts unless explicitly mentioned. 7. **Discrepancies:** In case of any discrepancies or issues related to credit redemption, decisions made by Activeloop will be considered final. 8. **Communication:** Participants agree to receive communications regarding the credit redemption process and other relevant information from GenAI360. 9. **Governing Law:** The credit redemption process and associated terms are governed by the applicable laws in the jurisdiction of Activeloop’s registered location. By redeeming the credits, participants acknowledge and agree to adhere to the aforementioned terms and conditions. Any violation of the same may result in appropriate actions as deemed suitable by Activeloop.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320308-course-introduction-gen-ai-360-foundational-model-certification 147,Chat with Your Code LlamaIndex and Activeloop Deep,"# Chat with Your Code: LlamaIndex and Activeloop Deep Lake for GitHub Repositories 🦙🌊 This guide is your quickstart toolkit for integrating LlamaIndex with Activeloop Deep Lake. You'll learn how to effortlessly index GitHub repositories into Deep Lake and interact with your code through natural language queries.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51349127-chat-with-your-code-llamaindex-and-activeloop-deep-lake-for-github-repositories 148,Chat with Your Code LlamaIndex and Activeloop Deep,"# Chat with Your Code: LlamaIndex and Activeloop Deep Lake for GitHub Repositories 🦙🌊 ## What are LlamaIndex and Activeloop Deep Lake? ### LlamaIndex: Your Data Framework for LLMs [LlamaIndex](https://gpt-index.readthedocs.io/en/stable/) is a bridge between your data and Language Learning Models (LLMs). Whether your data resides in APIs, SQL databases, or PDFs, LlamaIndex ingests and structures it into a format easily consumable by LLMs. It offers features like data connectors for various data sources, indexing capabilities for quick retrieval, and natural language query engines that make your data not just accessible but also interactive. It also offers [Llama Hub](https://llamahub.ai/), a platform aggregating custom plugins for all data types. ### Activeloop Deep Lake: Optimized Data Lake for ML [Activeloop](https://www.activeloop.ai/) Deep Lake is a data lake solution specifically designed for machine learning workflows. Unlike traditional data lakes, it's optimized for quick data retrieval and manipulation, making it an ideal choice for machine learning projects that require efficient data access. It supports various data types and formats, from images and videos to more complex data structures, while maintaining high performance. You can create local vector stores or use the managed serverless service. ### The Synergy When LlamaIndex and Activeloop Deep Lake are combined, they offer a robust, efficient, and interactive data management solution. 
LlamaIndex takes care of ingesting and structuring your data, while Activeloop Deep Lake provides optimized storage and retrieval capabilities. In this guide, we'll see how to store and interact with your code repositories through natural language queries, offering a unique and powerful way to manage and understand your codebase.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51349127-chat-with-your-code-llamaindex-and-activeloop-deep-lake-for-github-repositories 149,Chat with Your Code LlamaIndex and Activeloop Deep,"# Chat with Your Code: LlamaIndex and Activeloop Deep Lake for GitHub Repositories 🦙🌊 ## Requirements Before getting started, make sure you have the following: - [Python](https://www.python.org/downloads/) - Version 3.7 or newer is required. - An active account on OpenAI, along with an [OpenAI API key](https://platform.openai.com/account/api-keys). - A Deep Lake account, complete with a [Deep Lake API key](https://app.activeloop.ai/?utm_source=referral&utm_medium=platform&utm_campaign=signup_promo_settings&utm_id=plg). - A 'classic' personal access token from [GitHub](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token).",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51349127-chat-with-your-code-llamaindex-and-activeloop-deep-lake-for-github-repositories 150,Chat with Your Code LlamaIndex and Activeloop Deep,"# Chat with Your Code: LlamaIndex and Activeloop Deep Lake for GitHub Repositories 🦙🌊 ## Getting Started Creating a new Python virtual environment for this project is strongly advised. It helps maintain a tidy workspace by keeping dependencies in one place. - Create a Python virtual environment with: ```bash python3 -m venv repo-ai ``` Then activate it with: ```bash source repo-ai/bin/activate ```",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51349127-chat-with-your-code-llamaindex-and-activeloop-deep-lake-for-github-repositories 151,Chat with Your Code LlamaIndex and Activeloop Deep,"# Chat with Your Code: LlamaIndex and Activeloop Deep Lake for GitHub Repositories 🦙🌊 ## Install the required packages This project will need Python packages, including LlamaIndex and Deep Lake. Run the following pip command: ```bash pip install llama-index deeplake openai python-dotenv ``` Let's understand what we are installing and why. ### 1. `llama-index` ### What is it? LlamaIndex is a data framework that works with Large Language Models (LLMs). It helps ingest, structure, and make data accessible through natural language queries. ### Key Features: - **Data Connectors**: Ingest data from various sources like APIs, SQL databases, and PDFs. - **Data Indexing**: Structure the ingested data for quick and efficient retrieval. - **Query Engines**: Enable natural language queries to interact with your data. ### Use-Case in the guide: In the context of this guide, LlamaIndex will be used to index GitHub repositories and make them queryable through natural language interfaces. ### 2. `deeplake` ### What is it? Activeloop Deep Lake is a specialized data lake optimized for machine learning workflows. It allows for efficient storage and retrieval of various data types. ### Key Features: - **Optimized Storage**: Designed for quick data retrieval, ideal for machine learning applications. - **Data Type Support**: Handles multiple data types like images, videos, and complex data structures. 
### Use-Case in the guide: Deep Lake is the storage layer where the GitHub repositories indexed by LlamaIndex will be stored. ### 3. `openai` ### What is it? The OpenAI Python package provides an interface to OpenAI's GPT models and other services. It allows you to make API calls to interact with these models. ### Key Features: - **API Integration**: Easy integration with OpenAI's GPT models. - **Text Generation**: Generate text based on the model's training data and capabilities. ### Use-Case in the guide: LLamaIndex uses this package to interact with the OpenAI models. ### 4. `python-dotenv` ### What is it? Python-dotenv is a library that allows you to specify environment variables in a `.env` file, making it easier to manage configurations. ### Key Features: - **Environment Variable Management**: Store configuration variables in a `.env` file. - **Easy Import**: Automatically import variables from `.env` into your Python environment. ### Use-Case in the guide: This package manages the API keys.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51349127-chat-with-your-code-llamaindex-and-activeloop-deep-lake-for-github-repositories 152,Chat with Your Code LlamaIndex and Activeloop Deep,"# Chat with Your Code: LlamaIndex and Activeloop Deep Lake for GitHub Repositories 🦙🌊 ## How does LLamaIndex work? In the context of leveraging LlamaIndex for data-driven applications, the underlying logic and workflow are pretty simple. Here's a breakdown: - **Load Documents**: The first step involves loading your raw data into the system. You can do this manually, directly inputting the data, or through a data loader that automates the process. LlamaIndex offers specialized data loaders that can ingest data from various sources, transforming them into Document objects, and you can find many plugins on Llama Hub. This is a crucial step as it sets the stage for the subsequent data manipulation and querying functionalities. - **Parse the Documents into Nodes**: Once the documents are loaded, they are parsed into Nodes, essentially structured data units. These Nodes contain chunks of the original documents and carry valuable metadata and relationship information. This parsing process is vital as it organizes the raw data into a structured format, making it easier and more efficient for the system to handle. - **Construct an Index from Nodes or Documents**: After the Nodes are prepared, an index is constructed to make the data searchable and queryable. Depending on your needs, this index can be built directly from the original documents or the parsed Nodes. The index is often stored in structures like `VectorStoreIndex`, optimized for quick data retrieval. This step is the system's heart, turning your structured data into a robust, queryable database. - **Query the Index**: With the index in place, the final step is to query it. A query engine is initialized, allowing you to make natural language queries against the indexed data. 
This is where the magic happens: you can conversationally ask the system questions, and it will sift through the indexed data to provide accurate and relevant answers.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51349127-chat-with-your-code-llamaindex-and-activeloop-deep-lake-for-github-repositories 153,Chat with Your Code LlamaIndex and Activeloop Deep,"# Chat with Your Code: LlamaIndex and Activeloop Deep Lake for GitHub Repositories 🦙🌊 ## Let's code Now that the environment is ready and all the explanations are out of the way let's start with some code. In your project's directory, create a new file named `.env`, paste the following, and add your API keys: ```python GITHUB_TOKEN=""YOUR_GH_CLASSIC_TOKEN"" OPENAI_API_KEY=""YOUR_OPENAI_KEY"" ACTIVELOOP_TOKEN=""YOUR_ACTIVELOOP_TOKEN"" DATASET_PATH=""hub://YOUR_ORG/repository_vector_store"" ``` > Remember to edit this line DATASET_PATH=""hub://YOUR_ORG/repository_vector_store"", adding your Activeloop organization's ID to the URL. > After this, create a new file named `main.py` and paste the following code: ```python import os import textwrap from dotenv import load_dotenv from llama_index import download_loader from llama_hub.github_repo import GithubRepositoryReader, GithubClient from llama_index import VectorStoreIndex from llama_index.vector_stores import DeepLakeVectorStore from llama_index.storage.storage_context import StorageContext import re # Load environment variables load_dotenv() # Fetch and set API keys openai_api_key = os.getenv(""OPENAI_API_KEY"") active_loop_token = os.getenv(""ACTIVELOOP_TOKEN"") dataset_path = os.getenv(""DATASET_PATH"") def parse_github_url(url): pattern = r""https://github\.com/([^/]+)/([^/]+)"" match = re.match(pattern, url) return match.groups() if match else (None, None) def validate_owner_repo(owner, repo): return bool(owner) and bool(repo) def initialize_github_client(): github_token = os.getenv(""GITHUB_TOKEN"") return GithubClient(github_token) def main(): # Check for OpenAI API key openai_api_key = os.getenv(""OPENAI_API_KEY"") if not openai_api_key: raise EnvironmentError(""OpenAI API key not found in environment variables"") # Check for GitHub Token github_token = os.getenv(""GITHUB_TOKEN"") if not github_token: raise EnvironmentError(""GitHub token not found in environment variables"") # Check for Activeloop Token active_loop_token = os.getenv(""ACTIVELOOP_TOKEN"") if not active_loop_token: raise EnvironmentError(""Activeloop token not found in environment variables"") github_client = initialize_github_client() download_loader(""GithubRepositoryReader"") github_url = input(""Please enter the GitHub repository URL: "") owner, repo = parse_github_url(github_url) while True: owner, repo = parse_github_url(github_url) if validate_owner_repo(owner, repo): loader = GithubRepositoryReader( github_client, owner=owner, repo=repo, filter_file_extensions=( ["".py"", "".js"", "".ts"", "".md""], GithubRepositoryReader.FilterType.INCLUDE, ), verbose=False, concurrent_requests=5, ) print(f""Loading {repo} repository by {owner}"") docs = loader.load_data(branch=""main"") print(""Documents uploaded:"") for doc in docs: print(doc.metadata) break # Exit the loop once the valid URL is processed else: print(""Invalid GitHub URL. 
Please try again."") github_url = input(""Please enter the GitHub repository URL: "") print(""Uploading to vector store..."") # ====== Create vector store and upload data ====== vector_store = DeepLakeVectorStore( dataset_path=dataset_path, overwrite=True, runtime={""tensor_db"": True}, ) storage_context = StorageContext.from_defaults(vector_store=vector_store) index = VectorStoreIndex.from_documents(docs, storage_context=storage_context) query_engine = index.as_query_engine() # Include a simple question to test. intro_question = ""What is the repository about?"" print(f""Test question: {intro_question}"") print(""="" * 50) answer = query_engine.query(intro_question) print(f""Answer: {textwrap.fill(str(answer), 100)} \n"") while True: user_question = input(""Please enter your question (or type 'exit' to quit): "") if user_question.lower() == ""exit"": print(""Exiting, thanks for chatting!"") break print(f""Your question: {user_question}"") print(""="" * 50) answer = query_engine.query(user_question) print(f""Answer: {textwrap.fill(str(answer), 100)} \n"") if __name__ == ""__main__"": main() ``` ### Main Function (`main()`)",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51349127-chat-with-your-code-llamaindex-and-activeloop-deep-lake-for-github-repositories 154,Chat with Your Code LlamaIndex and Activeloop Deep,"# Chat with Your Code: LlamaIndex and Activeloop Deep Lake for GitHub Repositories 🦙🌊 ## Understanding the code At first glance, a lot is happening here; let's review it. Below is a step-by-step breakdown: ### Initialization and Environment Setup 1. **Import Required Libraries**: The script starts by importing all the necessary modules and packages. 2. **Load Environment Variables**: Using `dotenv`, it loads environment variables stored in the `.env` file. This is where API keys and tokens are stored securely. ### Helper Functions 1. **`parse_github_url(url)`:** This function takes a GitHub URL and extracts the repository owner and name using regular expressions. 2. **`validate_owner_repo(owner, repo)`:** Validates that both the repository owner and name are present. 3. **`initialize_github_client()`:** Initializes the GitHub client using the token fetched from the environment variables. 4. **API Key Checks**: Before proceeding, the script checks for the presence of the OpenAI API key, GitHub token, and Activeloop token, raising an error if any are missing. 5. **Initialize GitHub Client**: Calls `initialize_github_client()` to get a GitHub client instance. 6. **User Input for GitHub URL**: Asks the user to input a GitHub repository URL. 7. **URL Parsing and Validation**: Parses the URL to get the repository owner and name and validates them. 8. **Data Loading**: If the URL is valid, it uses `GithubRepositoryReader` from `llama_index` to load the repository data, specifically Python and Markdown files. 9. **Indexing**: The loaded data is then indexed using `VectorStoreIndex` and stored in a DeepLake vector store. This makes the data queryable. 10. **Query Engine Initialization**: Initializes a query engine based on the indexed data. 11. **Test Query**: Performs a test query to demonstrate the system's operation. 12. **User Queries**: Enters a loop where the user can input natural language queries to interact with the indexed GitHub repository. The loop continues until the user types 'exit'. 
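As a quick illustration of the URL-parsing helper described in the breakdown above, here is a minimal check you could run on its own (the repository URL is the same illustrative one used later in this guide):

```python
# Minimal sanity check of the parse_github_url helper defined in main.py.
owner, repo = parse_github_url(
    'https://github.com/soos3d/chatgpt-plugin-development-quickstart-express'
)
print(owner)  # soos3d
print(repo)   # chatgpt-plugin-development-quickstart-express
```
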
### Execution Entry Point - The script uses the standard `if __name__ == ""__main__"":` Python idiom to ensure that `main()` is called when the script is executed directly.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51349127-chat-with-your-code-llamaindex-and-activeloop-deep-lake-for-github-repositories 155,Chat with Your Code LlamaIndex and Activeloop Deep,"# Chat with Your Code: LlamaIndex and Activeloop Deep Lake for GitHub Repositories 🦙🌊 ## A Closer Look at the GithubRepositoryReader loader For this project, we used the `github_repo` data loader from LLamaIndex, and you can find its documentation on the [Llama Hub](https://llamahub.ai/l/github_repo). This is the part of the code taking care of that: ```python loader = GithubRepositoryReader( github_client, owner=owner, repo=repo, filter_file_extensions=(["".py"", "".js"","".ts"", "".md""], GithubRepositoryReader.FilterType.INCLUDE), verbose=False, concurrent_requests=10, ) ``` When it comes to loading data from a GitHub repository, the star of the show is the `GithubRepositoryReader` class. This class is a powerhouse designed to fetch, filter, and format repository data for indexing. Let's break down its key components: - **GitHub Client**: You'll first notice that the initialized GitHub client is passed into `GithubRepositoryReader`. This client provides authenticated access to GitHub repositories. - **Repository Details**: Next, the repository owner and name are specified. These are extracted from the URL you input, ensuring the data is fetched from the correct source. This is to give a nice message in the console. - **File Type Filters**: One of the most flexible features here is the ability to specify which file types to load. In this example, we're focusing on Python, JavaScript, TypeScript, and Markdown files. The inclusion of Markdown files is so the reader will also pull in README files, offering valuable context for the language model's understanding. - **Verbose Logging**: If you're the kind of person who likes to see every detail, you can enable verbose logging. This will print out detailed logs of the data loading process. - **Concurrent Requests**: This is where you can speed things up. The number of concurrent requests specifies how many data-fetching operations will happen simultaneously. A word of caution, though: cranking this number up could make you hit GitHub's rate limits, so tread carefully. Once all these parameters are set, the `load_data()` method swings into action. It fetches the repository data and neatly packages it into a list of `Document` objects, ready for the next stage— indexing. 
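If you want to sanity-check the loader on its own before moving on to indexing, a minimal sketch along these lines works; it assumes the `github_client`, `owner`, and `repo` variables already set up in `main.py`:

```python
# Assumes github_client, owner, and repo are defined as in main.py.
loader = GithubRepositoryReader(
    github_client,
    owner=owner,
    repo=repo,
    filter_file_extensions=(['.py', '.md'], GithubRepositoryReader.FilterType.INCLUDE),
    verbose=False,
    concurrent_requests=5,
)
docs = loader.load_data(branch='main')

# Each item is a LlamaIndex Document; its metadata records the file it came from.
for doc in docs[:5]:
    print(doc.metadata.get('file_path'), '-', len(doc.text), 'characters')
```

Printing the file paths and character counts is an easy way to confirm that the extension filter picked up the files you expect before you spend time and credits embedding them.
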
Let's get into the intricacies of the indexing part of the code, which is arguably one of the most crucial steps in the entire process.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51349127-chat-with-your-code-llamaindex-and-activeloop-deep-lake-for-github-repositories 156,Chat with Your Code LlamaIndex and Activeloop Deep,"# Chat with Your Code: LlamaIndex and Activeloop Deep Lake for GitHub Repositories 🦙🌊 ## Indexing: Transforming Data into Queryable Intelligence ```python # ====== Create vector store and upload data ====== vector_store = DeepLakeVectorStore( dataset_path=dataset_path, overwrite=True, runtime={""tensor_db"": True}, ) storage_context = StorageContext.from_defaults(vector_store=vector_store) index = VectorStoreIndex.from_documents(docs, storage_context=storage_context) query_engine = index.as_query_engine() ``` After the data is loaded, the next task is to make this raw information searchable and retrievable. This is where the `VectorStoreIndex` and `DeepLakeVectorStore` come into play. Let's dissect how they work: - **Vector Store**: The first thing that happens is the creation of a `DeepLakeVectorStore`. Think of this as a specialized database designed to hold vectors. It's a storage unit and an enabler for high-speed queries. The `dataset_path` parameter specifies where this vector store will reside, and the `overwrite` flag allows you to control whether existing data should be replaced. The parameter `runtime={""tensor_db"": True}` specifies that the vector store will use the Managed Tensor Database for storage and query execution on the Deep Lake infrastructure. The default here is to create a cloud vector DB. Instead, You can create a local vector DB by changing `dataset_path` to the name of the directory you want to use as DB. For example - dataset_path = ""repository_db"" The vector store will create a directory with the name specified in `dataset_path`. - **Storage Context**: Next up is the `StorageContext`, which essentially acts as a manager for the vector store. It ensures the vector store is accessible and manageable throughout the indexing process. - **From Documents to Index**: The `VectorStoreIndex.from_documents()` method is the workhorse here. It takes the list of `Document` objects you got from the GitHub repository and transforms them into a searchable index. This index is stored in the previously initialized `DeepLakeVectorStore`.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51349127-chat-with-your-code-llamaindex-and-activeloop-deep-lake-for-github-repositories 157,Chat with Your Code LlamaIndex and Activeloop Deep,"# Chat with Your Code: LlamaIndex and Activeloop Deep Lake for GitHub Repositories 🦙🌊 ## The Interactive Finale: Querying and User Engagement After the meticulous data loading and indexing process, the stage is set for the user to interact with the system. This is where the query engine comes into play. - **Query Engine**: Once the index is built, a query engine is initialized using `index.as_query_engine()`. You'll interact with this engine when you want to search through the GitHub repository. It's optimized for speed and accuracy, ensuring your natural language queries return the most relevant results. - **Introductory Question**: The code starts with an introductory question: ""What is the repository about?"" This serves multiple purposes. It acts as a litmus test to ensure the system is operational and gives the user an immediate sense of what questions can be asked. 
- **Formatting and Display**: The answer is then formatted to fit within a 100-character width for better readability, thanks to Python's `textwrap` library. - **User Input**: The code enters an infinite loop, inviting the user to ask questions. The user can type any query related to the GitHub repository, and the system will attempt to provide a relevant answer. - **Exit Strategy**: The loop continues indefinitely until the user types 'exit', providing a simple yet effective way to end the interaction. - **Query Execution**: Each time the user asks a question, the `query_engine.query()` method is called. This method consults the index built earlier and retrieves the most relevant information. - **Answer Presentation**: Like the introductory question, the answer to the user's query is formatted and displayed. This ensures that regardless of the complexity or length of the solution, it's presented in a readable manner.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51349127-chat-with-your-code-llamaindex-and-activeloop-deep-lake-for-github-repositories 158,Chat with Your Code LlamaIndex and Activeloop Deep,"# Chat with Your Code: LlamaIndex and Activeloop Deep Lake for GitHub Repositories 🦙🌊 ## Running the code Now, you're fully equipped to index and query GitHub repositories. To tailor the system to your specific needs, pay attention to the `filter_file_extensions` parameter. This is where you specify which types of files you'd like to include in your index. The current setting—`["".py"", "".js"", "".ts"", "".md""]`—focuses on Python, JavaScript, TypeScript, and Markdown files. Consider storing these extensions in your `.env` file for a more dynamic setup. You can easily switch between different configurations without modifying the codebase. It's a best practice that adds a layer of flexibility to your system. Run the command to start: `python3 main.py` The console will ask for a repository URL. To test, I used my repository explaining [how to build a ChatGPT plugin using Express.js](https://github.com/soos3d/chatgpt-plugin-development-quickstart-express), so this only has `.js` extensions. This is how the console will look during the interaction: ```python Please enter the GitHub repository URL: https://github.com/soos3d/chatgpt-plugin-development-quickstart-express Loading chatgpt-plugin-development-quickstart-express repository by soos3d Documents uploaded: {'file_path': 'README.md', 'file_name': 'README.md'} {'file_path': 'index.js', 'file_name': 'index.js'} {'file_path': 'src/app.js', 'file_name': 'app.js'} Uploading to vector store... Your Deep Lake dataset has been successfully created! Dataset(path='hub://YOUR_ORG/repository_db', tensors=['embedding', 'id', 'metadata', 'text']) tensor htype shape dtype compression ------- ------- ------- ------- ------- embedding embedding (5, 1536) float32 None id text (5, 1) str None metadata json (5, 1) str None text text (5, 1) str None Test question: What is the repository about? ================================================== Answer: The repository is a ChatGPT Plugin Quickstart with Express.js. It provides a foundation for developing custom ChatGPT plugins using JavaScript and Express.js. The sample plugin in the repository showcases how ChatGPT can integrate with external APIs, specifically API-Ninja's API, to enhance its capabilities. The plugin fetches airport data based on a city name provided by the user. Please enter your question (or type 'exit' to quit): how does the server work? 
Your question: how does the server work? ================================================== Answer: The server in this context works by setting up an Express.js server with various endpoints to serve a ChatGPT plugin. It initializes the server, configures it to parse JSON in the body of incoming requests, and sets up routes for serving the plugin manifest, OpenAPI schema, logo image, and handling API requests. It also defines a catch-all route to handle any other requests. Finally, the server starts and listens for requests on the specified port. Please enter your question (or type 'exit' to quit): exit Exiting, thanks for chatting! ```",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51349127-chat-with-your-code-llamaindex-and-activeloop-deep-lake-for-github-repositories 159,Chat with Your Code LlamaIndex and Activeloop Deep,"# Chat with Your Code: LlamaIndex and Activeloop Deep Lake for GitHub Repositories 🦙🌊 ## Diving Deeper: LlamaIndex's Low-Level API for customizations While the high-level API of LlamaIndex offers a seamless experience for most use cases, there might be situations where you need more granular control over the query logic. This is where the low-level API shines, offering customizations to fine-tune your interactions with the indexed data. ### Building the Index To start, you'll need to build an index from your documents, the same as we have done so far. ```python # Create an index of the documents try: vector_store = DeepLakeVectorStore( dataset_path=dataset_path, overwrite=True, runtime={""tensor_db"": True}, ) except Exception as e: print(f""An unexpected error occurred while creating or fetching the vector store: {str(e)}"") storage_context = StorageContext.from_defaults(vector_store=vector_store) index = VectorStoreIndex.from_documents(docs, storage_context=storage_context) ``` ### Configuring the Retriever The retriever is responsible for fetching relevant nodes from the index. LlamaIndex supports various retrieval modes, allowing you to choose the one that best fits your needs: ```python from llama_index.retrievers import VectorIndexRetriever retriever = VectorIndexRetriever(index=index, similarity_top_k=4) ``` Let's break this down: In modern retrieval systems, documents and queries are represented as vectors, often generated by machine learning models. When a query is made, its vector is compared to document vectors in the index using metrics like cosine similarity. The documents are then ranked based on their similarity scores to the query. **Top-k Retrieval**: Instead of returning all the documents sorted by their similarity, often only the top `k` most similar documents are of interest. This is where `similarity_top_k` comes into play. If `similarity_top_k=4`, the system will retrieve the top 4 most similar documents to the given query. In the context of the code: ```python retriever = VectorIndexRetriever(index=index, similarity_top_k=4) ``` The `VectorIndexRetriever` is configured to retrieve the top 4 documents most similar to any given query. Benefits of using `similarity_top_k`: - **Efficiency**: Retrieving only the top-k results can be faster and more memory-efficient than retrieving all results, especially when dealing with large datasets. - **Relevance**: Users are primarily interested in the most relevant results in many applications. By focusing on the top-k results, the system can provide the most pertinent information without overwhelming the user with less relevant results. 
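Before wiring the retriever into a query engine, you can also call it directly to see what a given `k` returns for a query. A small sketch, assuming the `index` built above (the question itself is just illustrative):

```python
# Inspect the top-k retrieved chunks and their similarity scores directly.
retriever = VectorIndexRetriever(index=index, similarity_top_k=4)
results = retriever.retrieve('How is the Express.js server configured?')

# Each result is a NodeWithScore: the chunk of text plus its similarity score.
for node_with_score in results:
    print(node_with_score.score, node_with_score.node.metadata.get('file_path'))
```

Looking at the scores for a few representative questions is a quick, manual way to judge whether your chosen `k` pulls in mostly relevant chunks or pads the context with noise.
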
However, choosing an appropriate value for `k` is essential. Some relevant results might be missed if `k` is too small. If `k` is too large, the system might return more results than necessary, which could be less efficient and potentially less helpful to the user. While creating this guide, I noticed that using a value above '4' for the parameter led the LLM to produce off-context responses. ### Customize the query engine In LlamaIndex, passing extra parameters and customizations to the query engine is possible. ```python from llama_index import get_response_synthesizer from llama_index.query_engine import RetrieverQueryEngine from llama_index.indices.postprocessor import SimilarityPostprocessor response_synthesizer = get_response_synthesizer() query_engine = RetrieverQueryEngine.from_args( retriever=retriever, response_mode='default', response_synthesizer=response_synthesizer, node_postprocessors=[ SimilarityPostprocessor(similarity_cutoff=0.7)] ) ``` Let's break down this customization: **Getting the Response Synthesizer**: ```python response_synthesizer = get_response_synthesizer() ``` Here, the `get_response_synthesizer` function is called to get an instance of the response synthesizer. The query engine will use this synthesizer to combine and refine the information retrieved by the retriever. **Configuring the Query Engine**: ```python query_engine = RetrieverQueryEngine.from_args( retriever=retriever, response_mode='default', response_synthesizer=response_synthesizer, node_postprocessors=[ SimilarityPostprocessor(similarity_cutoff=0.7)] ) ``` This section configures and initializes the query engine with the following components and settings: - `retriever`: This component fetches relevant nodes (or documents) based on the query. It's passed as an argument, and we set it up in the previous step. - `response_mode='default'`: This sets the mode in which the response will be synthesized. The 'default' mode means the system will ""create and refine"" an answer by sequentially going through each retrieved node, making a separate LLM call for each node. This mode is suitable for generating more detailed explanations. - `response_synthesizer=response_synthesizer`: The previously obtained response
This mode is suitable for generating detailed answers. 2. **Compact Mode**: ```python query_engine = RetrieverQueryEngine.from_args(retriever, response_mode='compact') ``` The compact mode fits as many node text chunks as possible within the maximum prompt size during each LLM call. If there are too many chunks, it refines the answer by processing multiple prompts. 3. **Tree Summarize Mode**: ```python query_engine = RetrieverQueryEngine.from_args(retriever, response_mode='tree_summarize') ``` This mode constructs a tree from a set of node objects, and the query then returns the root node as the response. It's beneficial for summarization tasks. 4. **No Text Mode**: ```python query_engine = RetrieverQueryEngine.from_args(retriever, response_mode='no_text') ``` In the no-text mode, the retriever fetches the nodes that would have been sent to the LLM but doesn't send them. This mode allows for inspecting the retrieved nodes without generating a synthesized response.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51349127-chat-with-your-code-llamaindex-and-activeloop-deep-lake-for-github-repositories 161,Chat with Your Code LlamaIndex and Activeloop Deep,"# Chat with Your Code: LlamaIndex and Activeloop Deep Lake for GitHub Repositories 🦙🌊 ## How does LLamaIndex compare to LangChain? Now that you understand how LLamaIndex works comparing it with the big-name LangChain is an excellent time. LlamaIndex and Langchain are both frameworks/libraries designed to enhance the capabilities of large language models by allowing them to interact with external data. While they serve similar overarching goals, their approaches and features differ. **Key Features**: *Llama Index*: - **Data Connectors**: Offers a variety of connectors to ingest data, making it versatile for different data sources. LLama Hub is an excellent source of community-made tools. - **Data Structuring**: Allows users to structure data using various index types, such as list index, tree index, and even the ability to compose indices. - **Query Abstraction**: Provides layers of abstraction, enabling both simple and complex data querying. *Langchain*: - **Modular Design**: Comes with modules for tools, agents, and chains, each serving a distinct purpose. - **Chains**: A series of steps or actions the language model takes, ideal for tasks requiring multiple interactions. - **Agents**: Autonomous entities that can decide the next steps in a process, adding a layer of decision-making to the model's interactions. - **Tools**: Utility agents are used to perform specific tasks, such as searching or querying an index. **Use Cases**: - *The Llama Index* is best suited for applications that require complex data structures and querying. Its strength lies in handling and querying structured data. - *Langchain*, on the other hand, excels in scenarios that require multiple interactions with a language model, especially when those interactions involve decision-making or a series of steps. Of course, you can combine the two; the potential of combining LlamaIndex and Langchain is promising. By integrating LlamaIndex's structured data querying capabilities with the multi-step interaction and decision-making features of Langchain, developers can create robust and versatile applications. As mentioned in the conclusion, This combination can offer the best of both worlds. So which one to use? 
Use the tool that makes it easier to take care of your use cases; this is usually the leading factor in deciding which one I want to use.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51349127-chat-with-your-code-llamaindex-and-activeloop-deep-lake-for-github-repositories 162,Chat with Your Code LlamaIndex and Activeloop Deep,"# Chat with Your Code: LlamaIndex and Activeloop Deep Lake for GitHub Repositories 🦙🌊 ## Conclusion In this comprehensive guide, we've journeyed through the intricacies of integrating LlamaIndex with Activeloop Deep Lake to create a conversational interface for GitHub repositories. We've seen how these powerful tools can transform a static codebase into an interactive, queryable entity. The synergy between LlamaIndex's data structuring and Deep Lake's optimized storage offers a robust solution for managing and understanding your code repositories. The code we've explored indexes GitHub repositories and makes them accessible through natural language queries. This opens up a plethora of possibilities. Imagine a future where you don't just browse through code; you have conversations with it. This gave you the basics of using LLamaIndex and Deep Lake; try to improve and customize this app to practice.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51349127-chat-with-your-code-llamaindex-and-activeloop-deep-lake-for-github-repositories 163,Iterative Optimization of LlamaIndex RAG Pipeline,"# Introduction In previous lessons, we learned about advanced techniques and evaluation metrics for LlamaIndex Retrieval-Augmented Generation (RAG) pipelines. Building on this knowledge, we now focus on optimizing a LlamaIndex RAG pipeline through a series of iterative evaluations. We aim to enhance the system's ability to retrieve and generate accurate and relevant information. Here's our step-by-step plan: 1. **Baseline Evaluation**: Construct a standard LlamaIndex RAG pipeline and establish an initial performance baseline. 1. Adjusting TOP_K Retrieval Values: Experiment with different values of k (1, 3, 5, 7) to understand their effect on the accuracy of retrieved information and the relevance of generated answers. 2. **Testing Different Embedding Models**: Evaluate models such as ""text-embedding-ada-002"" and ""cohere/embed-english-v3.0"" to identify the most effective one for our pipeline. 3. **Incorporating a Reranker**: Implement a reranking mechanism to refine the document selection process of the retriever. 4. **Employing a Deep Memory Approach**: Investigate the impact of a deep memory component on the accuracy of information retrieval. Through these steps, we aim to refine our RAG system systematically, enhancing its performance by providing accurate and relevant information. The code for this lesson is also available through a [Colab notebook](https://colab.research.google.com/drive/1BZGMXoQVEt7__6dNfRyBAHXwr4kQw4PZ#scrollTo=I93VcS630cbQ), where you can follow along.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320356-iterative-optimization-of-llamaindex-rag-pipeline-a-step-by-step-approach 164,Iterative Optimization of LlamaIndex RAG Pipeline,"# 1. Baseline evaluation The first step is installing the required Python packages. ```python !pip3 install deeplake llama_index langchain openai tiktoken cohere pandas torch sentence-transformers ``` Here, you can set our API keys. You can skip this step if you plan to use other services. 
```python import os os.environ['OPENAI_API_KEY'] = '' os.environ['ACTIVELOOP_TOKEN'] = '' os.environ['COHERE_API_KEY'] = '' ``` We download the data, which is a [single text file](https://raw.githubusercontent.com/run-llama/llama_index/main/docs/examples/data/paul_graham/paul_graham_essay.txt). You can use this or replace it with your own data. ```python !mkdir -p 'data/paul_graham/' !curl 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/examples/data/paul_graham/paul_graham_essay.txt' -o 'data/paul_graham/paul_graham_essay.txt' ``` Let’s load the Data and build LlamaIndex nodes/chunks. ```python from llama_index.node_parser import SimpleNodeParser from llama_index import SimpleDirectoryReader # First we create Document LlamaIndex objects from the text data documents = SimpleDirectoryReader(""./data/paul_graham/"").load_data() node_parser = SimpleNodeParser.from_defaults(chunk_size=512) nodes = node_parser.get_nodes_from_documents(documents) # By default, the node/chunks ids are set to random uuids. To ensure same id's per run, we manually set them. for idx, node in enumerate(nodes): node.id_ = f""node_{idx}"" print(f""Number of Documents: {len(documents)}"") print(f""Number of nodes: {len(nodes)} with the current chunk size of {node_parser.chunk_size}"") ``` ``` Number of Documents: 1 Number of nodes: 58 with the current chunk size of 512 ``` The next step is to create a LlamaIndex `VectorStoreIndex` object and use a `DeepLakeVectorStore` to store the vector embeddings. We also choose `gpt-3.5-turbo-1106` as our LLM and OpenAI’s embedding model `text-embedding-ada-002` ```python from llama_index import VectorStoreIndex, ServiceContext, StorageContext from llama_index.vector_stores import DeepLakeVectorStore from llama_index.embeddings.openai import OpenAIEmbedding from llama_index.llms import OpenAI # Create a local Deep Lake VectorStore dataset_path = ""./data/paul_graham/deep_lake_db"" vector_store = DeepLakeVectorStore(dataset_path=dataset_path, overwrite=True, exec_option=""compute_engine"") # LLM that will answer questions with the retrieved context llm = OpenAI(model=""gpt-3.5-turbo-1106"") # We use OpenAI's embedding model ""text-embedding-ada-002"" embed_model = OpenAIEmbedding() service_context = ServiceContext.from_defaults(embed_model=embed_model, llm=llm,) storage_context = StorageContext.from_defaults(vector_store=vector_store) vector_index = VectorStoreIndex(nodes, service_context=service_context, storage_context=storage_context, show_progress=True) ``` ``` Generating embeddings: 100% 58/58 [00:06<00:00, 8.75it/s] Uploading data to deeplake dataset. 100%|██████████| 58/58 [00:00<00:00, 169.79it/s]Dataset(path='./data/paul_graham/deep_lake_db', tensors=['text', 'metadata', 'embedding', 'id']) tensor htype shape dtype compression ------- ------- ------- ------- ------- text text (58, 1) str None metadata json (58, 1) str None embedding embedding (58, 1536) float32 None id text (58, 1) str None ``` With the vector index, we can now build a `QueryEngine`, which generates answers with the LLM and the retrieved chunks of text. ```python query_engine = vector_index.as_query_engine(similarity_top_k=10) response_vector = query_engine.query(""What are the main things Paul worked on before college?"") print(response_vector.response) ``` ``` Before college, Paul worked on writing and programming. ``` Now that we have a simple RAG pipeline, we can evaluate it. For that, we need a dataset. Since we don’t have one, we will generate one. 
`LlamaIndex` offers a `generate_question_context_pairs` module specifically for generating questions and context pairs. We will use that dataset to assess the RAG chunk retrieval and response capabilities. Let’s also save the generated dataset in JSON format for later use. In this case we only generate **58 question and context pairs**, but you can increase the number of samples in the dataset for a more thorough evaluation. ```python from llama_index.evaluation import generate_question_context_pairs qc_dataset = generate_question_context_pairs( nodes, llm=llm, num_questions_per_chunk=1 ) # We can save the dataset as a json file for later use. qc_dataset.save_json(""qc_dataset.json"") ``` ``` 100%|██████████| 58/58 [01:30<00:00, 1.56s/it] ``` You can load the dataset from your local disk if you have already generated it. ```python from llama_index.finetuning.embeddings.common import ( EmbeddingQAFinetuneDataset, ) qc_dataset = EmbeddingQAFinetuneDataset.from_json( ""qc_dataset.json"" ) ``` ```python DEFAULT_QA_GENERATE_PROMPT_TMPL = """"""\ Context information is below. --------------------- {context_str} --------------------- Given the context information and not prior knowledge. generate only questions based on the below query. You are a Teacher/ Professor. Your task is to setup \ {num_questions_per_chunk} questions for an upcoming \ quiz/examination. The questions should be diverse in nature \ across the document. Restrict the questions to the \ context information provided."" """""" ``` With the generated dataset, we can first start with the retrieval evaluations. We will use the `RetrieverEvaluator` class available in LlamaIndex to measure the Hit Rate and Mean Reciprocal Rank (MRR). **Hit Rate:** Think of the Hit Rate as playing a game of guessing. You're given a question and need to guess the correct answer from a list of options. The Hit Rate measures how often you guess the correct answer by only looking at your top few guesses. You have a high Hit Rate if you often find the right answer in your first few guesses. So, in a retrieval system, it's about how frequently the system finds the correct document within its top 'k' picks (where 'k' is a number you decide, like top 5 or top 10). **Mean Reciprocal Rank (MRR):** MRR is like measuring how quickly you can find a treasure in a list of boxes. Imagine you have a row of boxes, and only one has a treasure. The MRR calculates how close to the start of the row the treasure box is, on average. If the treasure is always in the first box you open, you're doing great and have an MRR of 1. If it's in the second box, the score is 1/2, since you took two tries to find it. If it's in the third box, your score is 1/3, and so on. MRR averages these scores across all your searches. So, for a retrieval system, MRR looks at where the correct document ranks in the system's guesses. If it's usually near the top, the MRR will be high, indicating good performance. In summary, **Hit Rate tells you how often the system gets it right in its top guesses**, and **MRR tells you how close to the top the right answer usually is**. Both metrics are useful for evaluating the effectiveness of a retrieval system, like how well a search engine or a recommendation system works. First, we define a function to display the Retrieval evaluation results in table format. 
```python import pandas as pd def display_results_retriever(name, eval_results): """"""Display results from evaluate."""""" metric_dicts = [] for eval_result in eval_results: metric_dict = eval_result.metric_vals_dict metric_dicts.append(metric_dict) full_df = pd.DataFrame(metric_dicts) hit_rate = full_df[""hit_rate""].mean() mrr = full_df[""mrr""].mean() metric_df = pd.DataFrame( {""Retriever Name"": [name], ""Hit Rate"": [hit_rate], ""MRR"": [mrr]} ) return metric_df ``` Then, Run the evaluation procedure. ```python from llama_index.evaluation import RetrieverEvaluator # We can evaluate the retievers with different top_k values. for i in [2, 4, 6, 8, 10]: retriever = vector_index.as_retriever(similarity_top_k=i) retriever_evaluator = RetrieverEvaluator.from_metric_names( [""mrr"", ""hit_rate""], retriever=retriever ) eval_results = await retriever_evaluator.aevaluate_dataset(qc_dataset) print(display_results_retriever(f""Retriever top_{i}"", eval_results)) ``` ``` Retriever Name Hit Rate MRR 0 Retriever top_2 0.827586 0.702586 Retriever Name Hit Rate MRR 0 Retriever top_4 0.913793 0.729167 Retriever Name Hit Rate MRR 0 Retriever top_6 0.922414 0.730891 Retriever Name Hit Rate MRR 0 Retriever top_8 0.956897 0.735509 Retriever Name Hit Rate MRR 0 Retriever top_10 0.982759 0.738407 ``` We notice that the Hit Rate increases as the top_k value increases, which is what we can expect. We're increasing the probability of the correct answer being included in the returned set. But how does that impact the quality of",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320356-iterative-optimization-of-llamaindex-rag-pipeline-a-step-by-step-approach 166,Iterative Optimization of LlamaIndex RAG Pipeline,"# 1. Baseline evaluation the generated answers?",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320356-iterative-optimization-of-llamaindex-rag-pipeline-a-step-by-step-approach 167,Iterative Optimization of LlamaIndex RAG Pipeline,"# 1. Baseline evaluation ## Evaluation for Relevancy and Faithfulness metrics. **Relevancy** evaluates whether the retrieved context and answer are relevant to the query. **Faithfulness** evaluates if the answer is faithful to the retrieved contexts or, in other words, whether there’s a hallucination. LlamaIndex includes functions that evaluate both metrics using an LLM as the judge. **GPT4 will be used as the judge.** Now, let's see how the top_k value affects these two metrics. ```python from llama_index.evaluation import RelevancyEvaluator, FaithfulnessEvaluator, BatchEvalRunner for i in [2, 4, 6, 8, 10]: # Set Faithfulness and Relevancy evaluators query_engine = vector_index.as_query_engine(similarity_top_k=i) # While we use GPT3.5-Turbo to answer questions # we can use GPT4 to evaluate the answers. 
llm_gpt4 = OpenAI(temperature=0, model=""gpt-4-1106-preview"") service_context_gpt4 = ServiceContext.from_defaults(llm=llm_gpt4) faithfulness_evaluator = FaithfulnessEvaluator(service_context=service_context_gpt4) relevancy_evaluator = RelevancyEvaluator(service_context=service_context_gpt4) # Run evaluation queries = list(qc_dataset.queries.values()) batch_eval_queries = queries[:20] runner = BatchEvalRunner( {""faithfulness"": faithfulness_evaluator, ""relevancy"": relevancy_evaluator}, workers=8, ) eval_results = await runner.aevaluate_queries( query_engine, queries=batch_eval_queries ) faithfulness_score = sum(result.passing for result in eval_results['faithfulness']) / len(eval_results['faithfulness']) print(f""top_{i} faithfulness_score: {faithfulness_score}"") relevancy_score = sum(result.passing for result in eval_results['relevancy']) / len(eval_results['relevancy']) print(f""top_{i} relevancy_score: {relevancy_score}"") ``` ``` top_2 faithfulness_score: 0.95 top_2 relevancy_score: 0.95 top_4 faithfulness_score: 0.95 top_4 relevancy_score: 0.95 top_6 faithfulness_score: 0.95 top_6 relevancy_score: 0.95 top_8 faithfulness_score: 1.0 top_8 relevancy_score: 1.0 top_10 faithfulness_score: 1.0 top_10 relevancy_score: 1.0 ``` We can notice that the relevancy and faithfulness scores increase as the top_k value increases. We also get a perfect score using eight retrieved chunks as context. These judgments rely on LlamaIndex's default Relevancy prompt template, which can also be inspected or customized. ",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320356-iterative-optimization-of-llamaindex-rag-pipeline-a-step-by-step-approach 168,Iterative Optimization of LlamaIndex RAG Pipeline,"# 2. **Changing the embedding model** Now that we have the baseline evaluation score, we can start changing some modules of our LlamaIndex RAG pipeline. We can start by changing the embedding model. Here, we will be testing the [cohere embedding model](https://docs.llamaindex.ai/en/stable/examples/embeddings/cohereai.html) `embed-english-v3.0` instead of OpenAI’s `text-embedding-ada-002`. ```python import os from llama_index import VectorStoreIndex, ServiceContext, StorageContext from llama_index.vector_stores import DeepLakeVectorStore from llama_index.embeddings.cohereai import CohereEmbedding from llama_index.llms import OpenAI # Create another local DeepLakeVectorStore to store the embeddings dataset_path = ""./data/paul_graham/deep_lake_db_1"" vector_store = DeepLakeVectorStore(dataset_path=dataset_path, overwrite=False, exec_option=""compute_engine"") llm = OpenAI(model=""gpt-3.5-turbo-1106"") embed_model = CohereEmbedding( cohere_api_key=os.getenv('COHERE_API_KEY'), model_name=""embed-english-v3.0"", input_type=""search_document"", ) service_context = ServiceContext.from_defaults(embed_model=embed_model, llm=llm,) storage_context = StorageContext.from_defaults(vector_store=vector_store) vector_index = VectorStoreIndex(nodes, service_context=service_context, storage_context=storage_context, show_progress=True) ``` ``` Generating embeddings: 100% 58/58 [00:02<00:00, 23.68it/s] Uploading data to deeplake dataset. 100%|██████████| 58/58 [00:00<00:00, 315.69it/s]Dataset(path='./data/paul_graham/deep_lake_db_1', tensors=['text', 'metadata', 'embedding', 'id']) tensor htype shape dtype compression ------- ------- ------- ------- ------- text text (58, 1) str None metadata json (58, 1) str None embedding embedding (58, 1024) float32 None id text (58, 1) str None ``` We run the retrieval evaluation using these new embeddings. 
```python from llama_index.evaluation import RetrieverEvaluator embed_model.input_type = ""search_query"" retriever = vector_index.as_retriever(similarity_top_k=10, embed_model=embed_model) retriever_evaluator = RetrieverEvaluator.from_metric_names( [""mrr"", ""hit_rate""], retriever=retriever ) eval_results = await retriever_evaluator.aevaluate_dataset(qc_dataset) print(display_results_retriever(f""Retriever_cohere_embeds"", eval_results)) ``` ``` Retriever Name Hit Rate MRR 0 Retriever_cohere_embeds 0.965517 0.754823 ``` These embeddings show a lower Hit Rate but a better MRR value. ",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320356-iterative-optimization-of-llamaindex-rag-pipeline-a-step-by-step-approach 169,Iterative Optimization of LlamaIndex RAG Pipeline,"# 3. **Incorporating a Reranker** Here, we will be testing three different Rerankers that we learned about in previous lessons. - `cross-encoder/ms-marco-MiniLM-L-6-v2` from the [Hugging Face hub](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L-6-v2). - [LlamaIndex’s](https://docs.llamaindex.ai/en/stable/examples/node_postprocessor/LLMReranker-Gatsby.html) `LLMRerank` - [Cohere’s](https://docs.cohere.com/docs/reranking) `CohereRerank`. ```python from llama_index.postprocessor.cohere_rerank import CohereRerank from llama_index.indices.postprocessor import SentenceTransformerRerank, LLMRerank st_reranker = SentenceTransformerRerank( top_n=5, model=""cross-encoder/ms-marco-MiniLM-L-6-v2"" ) llm_reranker = LLMRerank( choice_batch_size=4, top_n=5, ) cohere_rerank = CohereRerank(api_key=os.getenv('COHERE_API_KEY'), top_n=10) for reranker in [cohere_rerank, st_reranker, llm_reranker]: retriever_with_reranker = vector_index.as_retriever(similarity_top_k=10, postprocessor=reranker, embed_model=embed_model) retriever_evaluator_1 = RetrieverEvaluator.from_metric_names( [""mrr"", ""hit_rate""], retriever=retriever_with_reranker ) eval_results1 = await retriever_evaluator_1.aevaluate_dataset(qc_dataset) print(display_results_retriever(""Retriever with added Reranker"", eval_results1)) ``` ``` config.json: 100% 794/794 [00:00<00:00, 23.6kB/s] pytorch_model.bin: 100% 90.9M/90.9M [00:00<00:00, 145MB/s] tokenizer_config.json: 100% 316/316 [00:00<00:00, 11.0kB/s] vocab.txt: 100% 232k/232k [00:00<00:00, 3.79MB/s] special_tokens_map.json: 100% 112/112 [00:00<00:00, 3.88kB/s] Retriever Name Hit Rate MRR 0 Retriever with added Reranker 0.965517 0.754823 Retriever Name Hit Rate MRR 0 Retriever with added Reranker 0.965517 0.754823 Retriever Name Hit Rate MRR 0 Retriever with added Reranker 0.965517 0.754823 ``` Here, we unfortunately don't see a significant improvement in the retriever's performance. We suspect it is mainly caused by the evaluation dataset we’ve built. Rerankers can nonetheless offer great benefits depending on your application and are easy to implement.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320356-iterative-optimization-of-llamaindex-rag-pipeline-a-step-by-step-approach 170,Iterative Optimization of LlamaIndex RAG Pipeline,"# 4. **Employing Deep Memory** [Activeloop's Deep Memory](https://www.activeloop.ai/resources/use-deep-memory-to-boost-rag-apps-accuracy-by-up-to-22/) is a feature that introduces a tiny neural network layer trained to match user queries with relevant data from a corpus. 
While this addition incurs minimal latency during search, it can **boost retrieval accuracy by up to 27%.** First, let's reuse and convert our generated dataset into a format Deep Memory expects. We need queries and relevant IDs. ```python def create_query_relevance(qa_dataset): """"""Function for converting LlamaIndex dataset to correct format for deep memory training"""""" queries = [text for _, text in qa_dataset.queries.items()] relevant_docs = qa_dataset.relevant_docs relevance = [] for doc in relevant_docs: relevance.append([(relevant_docs[doc][0], 1)]) return queries, relevance train_queries, train_relevance = create_query_relevance(qc_dataset) print(len(train_queries)) ``` Now, let's upload our baseline Vectore Store on Activeloop's cloud platform and convert it into a managed database. ```python import deeplake local = ""./data/paul_graham/deep_lake_db"" hub_path = ""hub://genai360/optimization_paul_graham"" hub_managed_path = ""hub://genai360/optimization_paul_graham_managed"" # First upload our local vector store deeplake.deepcopy(local, hub_path, overwrite=True) # Create a managed vector store deeplake.deepcopy(hub_path, hub_managed_path, overwrite=True, runtime={""tensor_db"": True}) ``` You can replace the paths using your organization name and database name. Let’s create a LlamaIndex RAG pipeline using our new managed vector store. ```python import os from llama_index import VectorStoreIndex, ServiceContext, StorageContext from llama_index.vector_stores import DeepLakeVectorStore from llama_index.embeddings.openai import OpenAIEmbedding from llama_index.llms import OpenAI vector_store = DeepLakeVectorStore(dataset_path=hub_managed_path, overwrite=False, runtime={""tensor_db"": True}, read_only=True) llm = OpenAI(model=""gpt-3.5-turbo-1106"") embed_model = OpenAIEmbedding() service_context = ServiceContext.from_defaults(embed_model=embed_model, llm=llm,) storage_context = StorageContext.from_defaults(vector_store=vector_store) vector_index = VectorStoreIndex.from_vector_store(vector_store,service_context=service_context, storage_context=storage_context, use_async=False, show_progress=True) ``` ``` Deep Lake Dataset in hub://genai360/optimization_paul_graham_managed already exists, loading from the storage ``` And now we can launch the Deep Memory training. ```python from langchain.embeddings.openai import OpenAIEmbeddings embeddings = OpenAIEmbeddings() job_id = vector_store.vectorstore.deep_memory.train( queries=train_queries, relevance=train_relevance, embedding_function=embeddings.embed_documents, ) ``` ``` Your Deep Lake dataset has been successfully created! creating embeddings: 100%|██████████| 1/1 [00:02<00:00, 2.27s/it] 100%|██████████| 100/100 [00:00<00:00, 158.16it/s] Dataset(path='hub://genai360/optimization_paul_graham_managed_queries', tensors=['text', 'metadata', 'embedding', 'id']) tensor htype shape dtype compression ------- ------- ------- ------- ------- text text (100, 1) str None metadata json (100, 1) str None embedding embedding (100, 1536) float32 None id text (100, 1) str None DeepMemory training job started. Job ID: 652dceeed7d1579bf6abf3df ``` With the job_id, you can keep track of deep memory training. ```python vector_store.vectorstore.deep_memory.status('652dceeed7d1579bf6abf3df') ``` To evaluate our Deep Memory-enabled vector store, we can generate a test dataset. Here, we only send 20 chunks to make things fast, but a bigger dataset size would be recommended for a stronger evaluation. 
```python from llama_index.evaluation import generate_question_context_pairs # Generate test dataset test_dataset = generate_question_context_pairs( nodes[:20], llm=llm, num_questions_per_chunk=1 ) test_dataset.save_json(""test_dataset.json"") # We can also load the dataset from a json file if already done previously. from llama_index.finetuning.embeddings.common import ( EmbeddingQAFinetuneDataset, ) test_dataset = EmbeddingQAFinetuneDataset.from_json( ""test_dataset.json"" ) test_queries, test_relevance = create_query_relevance(test_dataset) ``` ``` 100%|██████████| 20/20 [00:29<00:00, 1.49s/it] ``` Let’s evaluate the recall on the generated test dataset using the Deep Lakes evaluation Python function. Recall measures the proportion of relevant items successfully retrieved by the system from all relevant items available in the dataset. **Formula**: Recall is calculated as: $$ \text{Recall} = \frac{\text{Number of Relevant Items Retrieved}}{\text{Total Number of Relevant Items in the Dataset}} $$ It focuses on the system's ability to find all relevant items. A high recall means the system is good at not missing relevant items. ```python # Evaluate recall on the generated test dataset recalls = vector_store.vectorstore.deep_memory.evaluate( queries=test_queries, relevance=test_relevance, embedding_function=embeddings.embed_documents, ) ``` ``` Embedding queries took 1.24 seconds ---- Evaluating without Deep Memory ---- Recall@1: 55.2% Recall@3: 87.1% Recall@5: 90.5% Recall@10: 97.4% Recall@50: 100.0% Recall@100: 100.0% ---- Evaluating with Deep Memory ---- Recall@1: 56.0% Recall@3:",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320356-iterative-optimization-of-llamaindex-rag-pipeline-a-step-by-step-approach 171,Iterative Optimization of LlamaIndex RAG Pipeline,"# 4. **Employing Deep Memory** 87.1% Recall@5: 92.2% Recall@10: 99.1% Recall@50: 100.0% Recall@100: 100.0% ``` Now, let’s get the Hit Rate and MRR scores of our Deep Memory enabled vector store. We start measuring the Hit Rate and MRR of our base vector store: ```python import os from llama_index.postprocessor.cohere_rerank import CohereRerank from llama_index.evaluation import ( RetrieverEvaluator, ) base_retriever = vector_index.as_retriever(similarity_top_k=10) deep_memory_retriever = vector_index.as_retriever( similarity_top_k=10, vector_store_kwargs={""deep_memory"": True} ) base_retriever_evaluator = RetrieverEvaluator.from_metric_names( [""mrr"", ""hit_rate""], retriever=base_retriever ) eval_results = await base_retriever_evaluator.aevaluate_dataset(test_dataset) print(display_results_retriever(""Retriever Results"", eval_results)) ``` ``` Retriever Name Hit Rate MRR 0 Retriever Results 0.974138 0.717809 ``` Now, the same evaluation for the Deep Memory Vector Store ```python deep_memory_retriever = vector_index.as_retriever( similarity_top_k=10, vector_store_kwargs={""deep_memory"": True} ) dm_retriever_evaluator = RetrieverEvaluator.from_metric_names( [""mrr"", ""hit_rate""], retriever=deep_memory_retriever ) dm_eval_results = await dm_retriever_evaluator.aevaluate_dataset(test_dataset) print(display_results_retriever(""Retriever Results"", dm_eval_results)) ``` ``` Retriever Name Hit Rate MRR 0 Retriever Results 0.991379 0.72865 ``` We can see a small increase in the MRR score compared to the baseline RAG pipeline while our Hit Rate stays the same. Note that this is again mainly due to our evaluation test set and the fact that we only chose 20 chunks. 
You can experiment with more chunks or manually build a different test set for improved results, especially in your application!",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320356-iterative-optimization-of-llamaindex-rag-pipeline-a-step-by-step-approach 172,Iterative Optimization of LlamaIndex RAG Pipeline,"# Conclusion In this lesson, optimizing a LlamaIndex RAG pipeline involved a structured approach to improve information retrieval and generation quality. We adjusted retrieval top_k values, evaluated two embedding models, introduced reranking mechanisms, and integrated Active Loop’s Deep Memory, some leading to performance enhancements. The improvements were somewhat negligible in this short demo. Still, it is crucial to try these more advanced improvements as they could have high impacts on a real application and improved evaluation dataset. We also highlight the importance of a good evaluation set of tools, such as a well-curated and large enough evaluation dataset. --- **RESOURCES** - Colab notebook for the lesson: [Google Colaboratory](https://colab.research.google.com/drive/1BZGMXoQVEt7__6dNfRyBAHXwr4kQw4PZ#scrollTo=I93VcS630cbQ) - LlamaIndex and Deep Memory integration: [How to get +15% RAG hit_rate improvement for question answering on documentation? - LlamaIndex 🦙 0.9.15.post2](https://docs.llamaindex.ai/en/stable/examples/retrievers/deep_memory.html) --- This lesson is based on the Llamaindex AI-engineer-workshop https://github.com/run-llama/ai-engineer-workshop posted by [Disiok](https://github.com/Disiok). [GitHub - run-llama/ai-engineer-workshop](https://github.com/run-llama/ai-engineer-workshop)",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320356-iterative-optimization-of-llamaindex-rag-pipeline-a-step-by-step-approach 173,RAG - Metrics & Evaluation,"# **Introduction** So far, we've observed that LLMs' response generation ability is improved by incorporating context from a vector database, a typical design approach used in RAG systems for chatbots and question-answering systems. RAG applications strive to produce factually grounded outputs supported by the context they retrieve. Creating an evaluation pipeline for these systems is crucial, as it allows for measuring the effectiveness of the added techniques. The evaluation process should focus on ensuring that the output meaningfully incorporates the context, avoiding mere repetition, and aiming to create responses that are comprehensive, non-repetitive, and devoid of redundancy.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352340-rag-metrics-evaluation 174,RAG - Metrics & Evaluation,"# **RAG Metrics** A holistic approach in RAG system evaluation is to present a detailed assessment of individual components and the system as a whole. Setting the baseline values for elements like chunking logic and embedding models and then examining each part independently and in an end-to-end manner is key for understanding the impact of modifications on the system’s overall performance. The holistic modules don't always need ground-truth labels, as they can be evaluated based on the query, context, response, and LLM interpretations. Here are five metrics commonly used to evaluate RAG systems: - **[Correctness](https://docs.llamaindex.ai/en/latest/examples/evaluation/correctness_eval.html)**: Checks if the answer generated matches the reference answer for the given query (labels required). 
The accuracy of the generated answer is verified by comparing it directly to a reference answer provided for the question. - **[Faithfulness](https://docs.llamaindex.ai/en/latest/examples/evaluation/faithfulness_eval.html)**: Determines if the answer is accurate and doesn't contain fabrication, relative to the retrieved contexts. The faithfulness metric evaluates the integrity of the answer, ensuring it faithfully represents the information in the retrieved context by checking that the answer is accurate and free from distortions or fabrications that could misrepresent the source material. - **[Context Relevancy](https://docs.llamaindex.ai/en/latest/examples/evaluation/relevancy_eval.html)**: Measures the relevance of the retrieved context and the resulting answer to the original query. It does so by ensuring that the system only retrieves information in a way that is pertinent to the user's request. - **[Guideline Adherence](https://docs.llamaindex.ai/en/latest/examples/evaluation/guideline_eval.html)**: Determines if the predicted answer follows a set of guidelines and whether the response meets predefined criteria, encompassing stylistic, factual, and ethical standards, so the answer responds to the query while also aligning with specific established norms. - [**Embedding Semantic Similarity](https://docs.llamaindex.ai/en/latest/examples/evaluation/semantic_similarity_eval.html#):** Calculates the similarity score between embeddings of the generated answer and the reference answer (reference labels required). The analysis starts broadly, focusing on the overarching goal of RAG applications to produce helpful outputs supported by contextually relevant facts from the retrievers. It is then narrowed down to specific evaluation metrics, including faithfulness, answer relevancy, and the [Sensibleness and Specificity Average (SSA)](https://arxiv.org/abs/2001.09977), focusing on avoiding hallucination in responses. Google's SSA metric evaluates open-domain chatbot responses for sensibleness (contextual coherence) and specificity (detailed and direct responses). Initially involving human evaluators, this approach aims to ensure outputs are comprehensive yet not overly vague. A high faithfulness score does not guarantee high relevance. For example, an answer that accurately reflects the context but lacks direct relevance to the question would score lower in answer relevance, especially if it includes incomplete or redundant information. ### Faithfulness Evaluator Avoiding vague responses is essential, but preventing LLMs from “hallucinating” is equally crucial. Hallucination refers to generating responses not grounded in factual content or context. LlamaIndex's `FaithfulnessEvaluator` assesses responses based on their alignment with the retrieved context, measuring this aspect. Faithfulness evaluation considers whether the response matches the retrieved context, aligns with the query, and adheres to the reference answer or guidelines. The result returns a boolean value indicating whether the response passed the accuracy and faithfulness checks. To execute the following codes, we must initially install the necessary libraries. This can be done using the Python package manager. Afterward, set the API keys for both OpenAI and the Activeloop service, as we will access indexes from a dataset hosted on Deep Lake. Remember to replace the placeholders with your API keys. 
The code for this lesson is also available through a [Colab notebook](https://colab.research.google.com/drive/1XcCcoUi5ZbUNmTCLYXL51zeD-6mXON5T?usp=sharing), where you can follow along. ```bash pip install -q llama-index==0.9.14.post3 deeplake==3.8.12 openai==1.3.8 cohere==4.37 ``` ```python import os os.environ[""OPENAI_API_KEY""] = """" os.environ[""ACTIVELOOP_TOKEN""] = """" ``` Here's an example illustrating how to evaluate a single response for faithfulness. ```python from llama_index import ServiceContext from llama_index.llms import OpenAI # build service context",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352340-rag-metrics-evaluation 175,RAG - Metrics & Evaluation,"# **RAG Metrics** llm = OpenAI(model=""gpt-4"", temperature=0.0) service_context = ServiceContext.from_defaults(llm=llm) from llama_index.vector_stores import DeepLakeVectorStore from llama_index.storage.storage_context import StorageContext from llama_index import VectorStoreIndex vector_store = DeepLakeVectorStore(dataset_path=""hub://genai360/LlamaIndex_paulgraham_essay"", overwrite=False) storage_context = StorageContext.from_defaults(vector_store=vector_store) index = VectorStoreIndex.from_vector_store( vector_store, storage_context=storage_context ) from llama_index.evaluation import FaithfulnessEvaluator # define evaluator evaluator = FaithfulnessEvaluator(service_context=service_context) # query index query_engine = index.as_query_engine() response = query_engine.query( ""What does Paul Graham do?"" ) eval_result = evaluator.evaluate_response(response=response) print( ""> response:"", response ) print( ""> evaluator result:"", eval_result.passing ) ``` ``` > response: Paul Graham is involved in various activities. He is a writer and has given talks on topics such as starting a startup. He has also worked on software development, including creating software for generating websites and building online stores. Additionally, he has been a studio assistant for a beloved teacher who is a painter. > evaluator result: True ``` Most of the previously mentioned code should be recognizable, as it involves generating an index from the Deep Lake vector store, using this index to query the LLM, and conducting the evaluation procedure. The query engine processes the question, and its response is then forwarded to the evaluator for analysis. Let’s focus on the evaluation process, which starts by setting up an evaluator to assess the accuracy of responses based on the service context. - The code initializes a `FaithfulnessEvaluator`, a tool designed to assess the accuracy of responses generated by the language model (GPT-4 in this case). - The evaluator uses the `service_context` created earlier, which includes the configured GPT-4 model. This context provides the necessary environment and parameters for the language model to function. - The primary role of the `FaithfulnessEvaluator` is to determine how closely the language model's responses adhere to accurate and reliable information. It uses a set of criteria or algorithms to compare the generated responses against known factual data or expected outputs. The evaluator then checks the response for its faithfulness to factual information. This means it evaluates whether the response accurately and reliably reflects historical facts about the queried topic. The result of this evaluation (`eval_result`) is then checked to see if it meets the standards of accuracy set by the evaluator, indicated by `eval_result.passing`. 
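Besides the boolean verdict, the `EvaluationResult` returned by the evaluator usually carries a few other useful fields in this version of LlamaIndex, such as a numeric `score` and a textual `feedback` with the judge's reasoning; the exact fields can vary between releases, so treat the snippet below as a minimal, optional sketch.

```python
# Optional: inspect more than the boolean verdict (fields may differ across llama-index versions).
print('passing: ', eval_result.passing)
print('score:   ', eval_result.score)
print('feedback:', eval_result.feedback)
```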
The result returns a boolean value indicating whether the response passed the accuracy and faithfulness checks.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352340-rag-metrics-evaluation 176,RAG - Metrics & Evaluation,"# **Retrieval Evaluation Metrics** The evaluation of retrieval in RAG systems involves determining the relevance of documents to specific queries. In information retrieval, the main goal is to identify unstructured data that meets a specific information requirement within a database. (Figure: the evaluation metrics of retrieval in RAG systems.) Metrics for evaluating a retriever include Mean Reciprocal Rank (MRR), Hit Rate, MAP, and NDCG. - **[MRR](https://en.wikipedia.org/wiki/Mean_reciprocal_rank)** measures the retrieval system's ability to return the best result as high up in the ranking as possible. - **Hit Rate** evaluates the presence of relevant items within the top results returned, which is crucial where users only consider the first few results. - **[MAP (Mean Average Precision)](https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Mean_average_precision)**: A measure of ranking quality across multiple queries. MAP calculates the mean of the average precisions for each query, where the average precision is computed as the mean of the precision scores after each relevant document is retrieved. - **[NDCG (Normalized Discounted Cumulative Gain)](https://en.wikipedia.org/wiki/Discounted_cumulative_gain)**: This metric evaluates the ranking of documents based on their relevance, giving more importance to relevant documents that appear higher in the ranking. It is normalized so that the perfect ranking's score is 1, allowing for comparison across different sets of queries. The `RetrieverEvaluator` is a more advanced technique from LlamaIndex, which can calculate metrics such as Mean Reciprocal Rank (MRR) and Hit Rate. It is designed to evaluate the efficacy of a retrieval system, which retrieves data pertinent to user queries from a database or index. This class assesses the retriever's performance in relation to specific questions and expected results, providing benchmarks for evaluation. As the evaluator's definition implies, it is necessary to compile an evaluation dataset comprising the source contents, a collection of questions, and the corresponding nodes that serve as references for answering those questions. The `generate_question_context_pairs` function in LlamaIndex can take care of generating an evaluation dataset. The rest of the process involves passing a query and using the dataset as a reference to ensure the chatbot retrieves the right documents. Please read the [following tutorial](https://docs.llamaindex.ai/en/stable/examples/evaluation/retrieval/retriever_eval.html) for an example. ",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352340-rag-metrics-evaluation 177,RAG - Metrics & Evaluation,"# **Golden Context Dataset** The [Golden Context dataset](https://github.com/microsoft/promptflow-resource-hub/blob/main/sample_gallery/golden_dataset/copilot-golden-dataset-creation-guidance.md) would consist of carefully selected queries paired with an ideally matched set of sources that contain the answers. Optionally, it could also include the perfect answers that are expected to be generated by the LLM. 
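To make that structure concrete, a single golden entry can be as simple as a query mapped to the source that answers it, optionally with a reference answer. The record below is purely illustrative; the field names and values are hypothetical and not taken from the guide.

```python
# Illustrative golden-context entry; field names and values are hypothetical.
golden_entry = {
    'question': 'How do I load documents from a local folder with LlamaIndex?',
    'source': 'docs/loading/simple_directory_reader.md',  # the source that answers the query
    'reference_answer': 'Use SimpleDirectoryReader to read the folder and pass the documents to an index.',
}
```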
For our purposes, 177 representative user queries have been manually curated. For each query, the most relevant source within our documentation has been diligently identified so that these sources directly address the queries in question. The Golden Context Dataset serves as our benchmark for precision evaluation. The dataset is structured around 'question' and 'source' pairings. To create a Golden Dataset, gather a set of realistic customer questions and pair them with expert answers, then use this dataset to compare against responses from a language model for quality assurance, ensuring the LLM's answers align closely with the expert ones for accuracy and relevance. If you want to learn more about the dataset's creation, follow [the link in the resources](https://github.com/microsoft/promptflow-resource-hub/blob/main/sample_gallery/golden_dataset/copilot-golden-dataset-creation-guidance.md) section. Once the golden dataset is ready, the next step is to use it to measure the quality of LLM responses. After each evaluation, metrics like the following will be available to quantify the user experience. For example: | Similarity | Relevance | Coherence | Grounded-ness | | --- | --- | --- | --- | | 3.7 | 77 | 88 | 69 | ",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352340-rag-metrics-evaluation 178,RAG - Metrics & Evaluation,"# **Community-Based Evaluation Tools** LlamaIndex incorporates various evaluation tools designed to foster community engagement and collaborative endeavors. The devices are structured to support a shared process of assessing and enhancing the system, empowering users and developers to play an active role in the evaluation. Through the use of tools shaped by community input, LlamaIndex creates a collaborative environment where constant feedback is smoothly incorporated, contributing to continual development. Notable tools in this ecosystem include: - [Ragas](https://github.com/explodinggradients/ragas/blob/main/docs/howtos/integrations/llamaindex.ipynb): Another key tool that provides a framework for evaluating and integrating with LlamaIndex, offering detailed metrics. - [DeepEval](https://docs.llamaindex.ai/en/latest/module_guides/evaluating/root.html#../../../community/integrations/deepeval.md): A tool designed for in-depth evaluation, facilitating comprehensive assessments of various aspects of the system.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352340-rag-metrics-evaluation 179,RAG - Metrics & Evaluation,"# **Community-Based Evaluation Tools** ## **Evaluating with Ragas** The evaluation process involves importing specific metrics from Ragas, such as *faithfulness, answer relevancy, context precision, context recall, and harmfulness.* When evaluating using Ragas, the following elements are essential: - **Query Engine**: This is the primary component and acts as the core of the evaluation process, where its performance is assessed. - **Metrics**: Ragas provides a range of metrics specifically designed to evaluate a nuanced assessment of the engine's capabilities. - **Questions**: A curated set of questions is required, which are used to probe the engine's ability to retrieve and generate accurate responses. We must first set up a query engine to demonstrate how to use the Ragas library, which involves loading a document. For this, we will use the contents of the ""New York City"" Wikipedia page as our source document. 
Additionally, installing two more libraries is necessary: one for processing the webpage content and the other for the evaluation library itself. ```bash pip install html2text==2020.1.16 ragas==0.0.22 ``` At this point, we can use the `SimpleWebPageReader` class by providing a URL as its argument to load the content. These documents can then create the index and the query engine. Then, it is possible to ask questions about the document! ```bash from llama_index.readers.web import SimpleWebPageReader from llama_index import VectorStoreIndex, ServiceContext documents = SimpleWebPageReader(html_to_text=True).load_data( [""https://en.wikipedia.org/wiki/New_York_City""] ) vector_index = VectorStoreIndex.from_documents( documents, service_context=ServiceContext.from_defaults(chunk_size=512) ) query_engine = vector_index.as_query_engine() response_vector = query_engine.query(""How did New York City get its name?"") print(response_vector) ``` ``` New York City got its name in honor of the Duke of York, who later became King James II of England. The Duke of York was appointed as the proprietor of the former territory of New Netherland, including the city of New Amsterdam, when England seized it from Dutch control. ``` Returning to our goal of evaluating the models, the next step involves composing a series of questions, ideally derived from the original document, to ensure a more accurate performance assessment. ```python eval_questions = [ ""What is the population of New York City as of 2020?"", ""Which borough of New York City has the highest population?"", ""What is the economic significance of New York City?"", ""How did New York City get its name?"", ""What is the significance of the Statue of Liberty in New York City?"", ] eval_answers = [ ""8,804,000"", # incorrect answer ""Queens"", # incorrect answer ""New York City's economic significance is vast, as it serves as the global financial capital, housing Wall Street and major financial institutions. Its diverse economy spans technology, media, healthcare, education, and more, making it resilient to economic fluctuations. NYC is a hub for international business, attracting global companies, and boasts a large, skilled labor force. Its real estate market, tourism, cultural industries, and educational institutions further fuel its economic prowess. The city's transportation network and global influence amplify its impact on the world stage, solidifying its status as a vital economic player and cultural epicenter."", ""New York City got its name when it came under British control in 1664. King Charles II of England granted the lands to his brother, the Duke of York, who named the city New York in his own honor."", ""The Statue of Liberty in New York City holds great significance as a symbol of the United States and its ideals of liberty and peace. It greeted millions of immigrants who arrived in the U.S. by ship in the late 19th and early 20th centuries, representing hope and freedom for those seeking a better life. It has since become an iconic landmark and a global symbol of cultural diversity and freedom."", ] eval_answers = [[a] for a in eval_answers] ``` This",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352340-rag-metrics-evaluation 180,RAG - Metrics & Evaluation,"# **Community-Based Evaluation Tools** ## **Evaluating with Ragas** stage is the setup phase of the evaluation process. 
QueryEngine's proficiency is assessed based on how effectively it processes and responds to these specific questions, utilizing the answers as a standard for measuring performance. We need to import the metrics from the Ragas library. ```python from ragas.metrics import ( faithfulness, answer_relevancy, context_precision, context_recall, ) from ragas.metrics.critique import harmfulness metrics = [ faithfulness, answer_relevancy, context_precision, context_recall, harmfulness, ] ``` The `metrics` list compiles the metrics into a collection, which can then be used in the evaluation process to assess various aspects of the `QueryEngine`'s performance. The results, which include scores for each metric, can be further analyzed. Finally, let's run the evaluation: ```python from ragas.llama_index import evaluate result = evaluate(query_engine, metrics, eval_questions, eval_answers) # print the final scores print(result) ``` ``` evaluating with [faithfulness] 100%|██████████| 1/1 [00:16<00:00, 16.95s/it] evaluating with [answer_relevancy] 100%|██████████| 1/1 [00:03<00:00, 3.54s/it] evaluating with [context_precision] 100%|██████████| 1/1 [00:02<00:00, 2.73s/it] evaluating with [context_recall] 100%|██████████| 1/1 [00:07<00:00, 7.06s/it] evaluating with [harmfulness] 100%|██████████| 1/1 [00:02<00:00, 2.16s/it] {'faithfulness': 0.8000, 'answer_relevancy': 0.7634, 'context_precision': 0.6000, 'context_recall': 0.8667, 'harmfulness': 0.0000} ``` The metrics analysis quantifies different aspects of the RAG system's performance: 1. `faithfulness`: **0.8000** - measures how accurately the system's responses adhere to the factual content of the source material. A score of 0.8 indicates *relatively high* faithfulness, meaning the responses are mostly accurate and true to the source. 2. `answer_relevancy`: **0.7634** - measures how relevant the system's responses are to the given queries. A score of 0.7634 suggests that most of the system's responses are *closely aligned* with the queries' intent, though there is room for improvement. 3. `context_precision`: **0.6000** - evaluates the precision of the context used by the system to generate responses. A moderate score of 0.6 indicates that the context used *sometimes includes irrelevant* information. 4. `context_recall`: **0.8667** - measures the recall rate of relevant context determined by the system. A high score of 0.8667 suggests that the system is *effective* in retrieving most of the relevant context. 5. `harmfulness`: **0.0000** - measures the system for harmful or inappropriate content generation. A score of 0 implies that *no harmful content* was generated in the evaluated responses.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352340-rag-metrics-evaluation 181,RAG - Metrics & Evaluation,"# **Community-Based Evaluation Tools** ## **The Custom RAG Pipeline Evaluation** For a practical evaluation of a custom RAG system, it is essential to employ a range of assessment benchmarks that capture various facets of the RAG system, such as its effectiveness and reliability. The variety of measures guarantees a detailed evaluation and in-depth insight into the system's overall capabilities. This section involves developing a tailored evaluation pipeline, beginning with loading a dataset, forming an evaluation dataset from its contents, and then calculating the metrics we have previously discussed. Initially, we download the text file to serve as the dataset. 
```bash wget 'https://raw.githubusercontent.com/idontcalculate/data-repo/main/venus_transmission.txt' ``` ``` --2023-12-18 20:21:53-- https://raw.githubusercontent.com/idontcalculate/data-repo/main/venus_transmission.txt Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ... Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 19241 (19K) [text/plain] Saving to: ‘venus_transmission.txt’ venus_transmission. 100%[===================>] 18.79K --.-KB/s in 0.001s 2023-12-18 20:21:53 (32.6 MB/s) - ‘venus_transmission.txt’ saved [19241/19241] ``` Then, the text file can be loaded as a `Document` object, identifiable by LlamaIndex. ```python from llama_index import SimpleDirectoryReader reader = SimpleDirectoryReader(input_files=[""/content/venus_transmission.txt""]) docs = reader.load_data() print(f""Loaded {len(docs)} docs"") ``` ``` Loaded 1 docs ``` The `SimpleNodeParser`, in this context, converts documents into a structured format known as nodes and serves for customization in parsing documents, specifically in terms of defining the chunk size, managing overlap, and incorporating metadata. Each chunk of the document is treated as a node. In this case, the parser is set with a `chunk_size` of 512, meaning each node will consist of 512 characters from the original document. These chunks can then be used to generate the indexes. ```python from llama_index.node_parser import SimpleNodeParser from llama_index import VectorStoreIndex # Build index with a chunk_size of 512 node_parser = SimpleNodeParser.from_defaults(chunk_size=512) nodes = node_parser.get_nodes_from_documents(docs) vector_index = VectorStoreIndex(nodes) ``` The indexes can now be used as a query engine to query a specific question concerning the source document. ```python query_engine = vector_index.as_query_engine() response_vector = query_engine.query(""What was The first beings to inhabit the planet?"") print( response_vector.response ) ``` ``` The first beings to inhabit the planet were a dinoid and reptoid race from two different systems outside our solar system. ``` The response generated by the query engine is stored in `response_vector`. So, the document is processed into nodes, indexed, and then queried using a language model. To investigate the response further, we can use the `.source_nodes` key to access the retrieved document from the index used to answer the question. ```python # First retrieved node response_vector.source_nodes[0].get_text() ``` ``` They had heard of this beautiful new planet. At this time, Earth had two moons to harmonize the weather conditions and control the tides of the large bodies of water. The first beings to inhabit the planet were a dinoid and reptoid race from two different systems outside our solar system. They were intelligent and walked on two legs like humans and were war-like considering themselves to be superior to all other life forms. In the past, the four races of humans had conflicts with them before they outgrew such behavior. They arrived on Earth to rob it of its minerals and valuable gems. Soon they had created a terrible war. They were joined by re- 1 enforcements from their home planets. One set up its base on one of the Earth's moons, the other on Earth. It was a terrible war with advanced nuclear and laser weapons like you see in your science fiction movies. It lasted very long. 
Most of the life forms lay in singed waste and the one moon was destroyed. No longer interested in Earth, they went back to their planets leaving their wounded behind, they had no use for them. The four races sent a few forces",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352340-rag-metrics-evaluation 182,RAG - Metrics & Evaluation,"# **Community-Based Evaluation Tools** ## **The Custom RAG Pipeline Evaluation** to see if they could help the wounded dinoids and reptilians and to see what they could do to repair the Earth. They soon found that due to the nuclear radiation it was too dangerous on Earth before it was cleared. Even they had to remain so as not to contaminate their own planets. Due to the radiation, the survivors of the dinoids and reptoids mutated into the Dinosaurs and giant reptilians you know of in your history. The humans that were trapped there mutated into what you call Neanderthals. The Earth remained a devastated ruin, covered by a huge dark nuclear cloud and what vegetation was left was being devoured by the giant beings, also humans and animals by some. It was this way for hundreds of years before a giant comet crashed into one of the oceans and created another huge cloud. This created such darkness that the radiating heat of the Sun could not interact with Earth's gravitational field and an ice age was created. This destroyed the mutated life forms and gave the four races the chance to cleanse and heal the Earth with technology and their energy. Once again, they brought various forms of life to the Earth, creating again a paradise, except for extreme weather conditions and extreme tidal activities. ``` We can index the second item on the list to view the content of the second node that contributed to the generation of the content. ```python # Second retrieved node response_vector.source_nodes[1].get_text() ``` ``` Due to the radiation, the survivors of the dinoids and reptoids mutated into the Dinosaurs and giant reptilians you know of in your history. The humans that were trapped there mutated into what you call Neanderthals. The Earth remained a devastated ruin, covered by a huge dark nuclear cloud and what vegetation was left was being devoured by the giant beings, also humans and animals by some. It was this way for hundreds of years before a giant comet crashed into one of the oceans and created another huge cloud. This created such darkness that the radiating heat of the Sun could not interact with Earth's gravitational field and an ice age was created. This destroyed the mutated life forms and gave the four races the chance to cleanse and heal the Earth with technology and their energy. Once again, they brought various forms of life to the Earth, creating again a paradise, except for extreme weather conditions and extreme tidal activities. During this time they realized that their planets were going into a natural dormant stage that they would not be able to support physical life. So they decided to colonize the Earth with their own people. They were concerned about the one moon, because it is creating earthquakes and tidal waves and storms and other difficulties for the structure of the Earth. They knew how to drink fluids to protect and balance themselves. These were the first colonies like Atlantis and Lemuria. The rest of the people stayed on their planets to await their destiny. They knew that they would perish and die. 
They had made the decision only to bring the younger generation with some spiritual teachers and elders to the Earth. The planet was too small for all of them. But they had no fear of death. They had once again created a paradise. They were instructed to build special temples here as doorways to the other dimensions. Because of the aggressive beings, the temples were hidden for future times when they will be important. There they could do their meditations and the higher beings. They",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352340-rag-metrics-evaluation 183,RAG - Metrics & Evaluation,"# **Community-Based Evaluation Tools** ## **The Custom RAG Pipeline Evaluation** were informed to build two shields around the Earth out of ice particles to balance the influence of the one moon. They created a tropical climate for the Earth. There were no deserts at that time. They have special crystals for these doorways and they were able to lower their vibration to enter through these doorways. The news spread of the beautiful planet. ``` You can view the textual information from the second node that the query engine found relevant, providing additional context or information in response to the query. This helps understand the breadth of knowledge the query engine pulls from and how different parts of the indexed documents contribute to the overall response. As previously mentioned, our task involves creating an evaluation dataset. This essentially entails generating a series of questions and their respective answers, all in relation to the content we have loaded. The `generate_question_context_pairs` class leverages the LLM to create questions based on the content of each node: For each node, two questions will be created, resulting in a dataset where each item consists of a context (the node's text) and a corresponding set of questions. The Q&A dataset will serve us to evaluate the capabilities of an RAG system in question generation and context understanding tasks. You can see the first ten questions in the output. ```python from llama_index.llms import OpenAI from llama_index.evaluation import generate_question_context_pairs # Define an LLM llm = OpenAI(model=""gpt-3.5-turbo"") qa_dataset = generate_question_context_pairs( nodes, llm=llm, num_questions_per_chunk=2 ) queries = list(qa_dataset.queries.values()) print( queries[0:10] ) ``` ``` 100%|██████████| 13/13 [00:31<00:00, 2.46s/it] ['Explain the role of different alien races in the history of our solar system according to the information provided. How did these races contribute to the transformation process and why was Earth considered a special planet?', 'Describe the advanced abilities and technology possessed by the Masters and beings mentioned in the context. How did their understanding of creation and their eternal nature shape their perspective on life and death?', 'How did the four races of humans demonstrate their mastery of creativity and what were the potential consequences of using this power for selfish reasons?', 'Describe the initial state of Earth before it became a planet and how the four races of humans contributed to its transformation into a unique paradise.', 'How did the arrival of the dinoid and reptoid races on Earth lead to a devastating war? 
Discuss the reasons behind their conflict with the four races of humans and the impact it had on the planet.', ""Explain the process of mutation that occurred among the survivors of the dinoids and reptoids, resulting in the emergence of dinosaurs and Neanderthals. Discuss the role of nuclear radiation and its effects on the Earth's environment and living organisms."", 'How did the survivors of the dinoids and reptoids mutate into the dinosaurs and giant reptilians we know of in history? Explain the role of radiation in this process.', 'Describe the events that led to the creation of an ice age on Earth. How did this ice age affect the mutated life forms and provide an opportunity for the four races to cleanse and heal the Earth?', 'Explain the purpose and significance of building special temples as doorways to other dimensions in the context of the given information. How did these temples serve the people and protect them from the dark forces?', 'Discuss the actions taken by the colonies in response to the war declared by another race of humans. How did the colonies ensure the preservation of their knowledge and technology, and what measures did they take to protect themselves from the dark forces?', 'How did the",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352340-rag-metrics-evaluation 184,RAG - Metrics & Evaluation,"# **Community-Based Evaluation Tools** ## **The Custom RAG Pipeline Evaluation** inhabitants of Lemuria and Atlantis ensure that their knowledge and technology would not be misused by the dark forces?', 'What measures were taken by the controlling forces to prevent the people from communicating with other dimensions and remembering their past lives or the hidden temples?', 'How has the manipulation and control of human beings by the rich and powerful impacted society throughout history? Discuss the role of religion, race, and power in perpetuating this control and the potential consequences for humanity.', 'Explain the role of the Galactic Brotherhood and other spiritually evolved beings in the transformation of Earth. How have they worked to change the energy of the planet and its inhabitants? Discuss the potential risks they aim to mitigate, such as genetic manipulation and the use of destructive technologies.', ""Explain the role of the Galactic Brotherhood in the transformation of the planet's energy and the introduction of new technologies. How are different beings, such as the Spiritual Hierarchy, Ascended Masters, and nature spirits, cooperating in this process?"", 'Discuss the significance of the hidden temples and the space ships in the frequency change of the Earth. How do these elements contribute to the gradual transformation and what effects do they have on the environment?', 'Explain the concept of chakras and their role in the transformation process described in the context information. How do chakras relate to the abilities of mental telepathy, intuition, and past life recollection?', ""Discuss the significance of the Earth's future purpose as mentioned in the context information. How does it differ from its past role? How does the concept of yin and yang, as well as the negative and positive energies, tie into this transformation?"", 'How does the concept of division into good and bad energies contribute to the perpetuation of negative forces and selfishness among individuals?', 'Discuss the shift in power dynamics from feminine qualities to male energy in societies after genetic manipulation. 
How does the future vision of equal and balanced male and female powers impact the purpose of Earth for human beings?', 'How has the balance of feminine and masculine energies shifted throughout human history, and what is the envisioned future for this balance on Earth?', 'In the future described in the context information, how will individuals govern themselves and what role will manmade laws play in society?', 'How does the concept of obeying spiritual laws contribute to living in harmony on other planets for millions of years? Provide examples or evidence from the context information to support your answer.', 'According to the context information, what are some key aspects of the future living style and awareness on Earth after the transformation is complete? How do these aspects differ from the current state of existence?', ""How does the concept of eternity and the ability to overcome time and aging impact one's perspective on life and the enjoyment of experiences?"", 'In what ways can individuals create a balance and harmony within themselves, and why is it important for them to do so?'] ``` The `RetrieverEvaluator` class can now use this QA dataset to evaluate the retriever's performance. It queries each question using the retriever and evaluates which chunks are returned as the answer. The higher MRR and Hit rate numbers represent the retriever’s ability to identify the chunk with the correct answer. ```python from llama_index.evaluation import RetrieverEvaluator retriever = vector_index.as_retriever(similarity_top_k=2) retriever_evaluator = RetrieverEvaluator.from_metric_names( [""mrr"", ""hit_rate""], retriever=retriever ) # Evaluate eval_results = await retriever_evaluator.aevaluate_dataset(qa_dataset) def display_results(name, eval_results): """"""Display results from evaluate."""""" metric_dicts = [] for eval_result in eval_results: metric_dict = eval_result.metric_vals_dict metric_dicts.append(metric_dict) full_df = pd.DataFrame(metric_dicts) hit_rate = full_df[""hit_rate""].mean() mrr = full_df[""mrr""].mean() metric_df = pd.DataFrame( {""Retriever Name"":",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352340-rag-metrics-evaluation 185,RAG - Metrics & Evaluation,"# **Community-Based Evaluation Tools** ## **The Custom RAG Pipeline Evaluation** [name], ""Hit Rate"": [hit_rate], ""MRR"": [mrr]} ) return metric_df display_results(""OpenAI Embedding Retriever"", eval_results) ``` ```bash | | Retriever Name | Hit Rate | MRR | | ---- | -------------------------- | -------- | --------- | | 0 | OpenAI Embedding Retriever | 0.884615 | 0.730769 | ``` We can now enhance our analysis of the application's performance by including additional metrics like faithfulness and relevancy. To achieve this, we utilize a subset of the generated Q&A dataset and create instances of both GPT-3.5 and GPT-4. It's advisable to employ a more advanced model like GPT-4 for evaluation purposes while using the less expensive model for the generation process. 
```python # gpt-3.5-turbo gpt35 = OpenAI(temperature=0, model=""gpt-3.5-turbo"") service_context_gpt35 = ServiceContext.from_defaults(llm=gpt35) # gpt-4 gpt4 = OpenAI(temperature=0, model=""gpt-4"") service_context_gpt4 = ServiceContext.from_defaults(llm=gpt4) vector_index = VectorStoreIndex(nodes, service_context = service_context_gpt35) query_engine = vector_index.as_query_engine() eval_query = queries[10] response_vector = query_engine.query(eval_query) print( ""> eval_query: "", eval_query ) print( ""> response_vector:"", response_vector ) ``` ``` > eval_query: How did the colonies respond to the declaration of war by the dark forces, and what measures did they take to protect their knowledge and technology? > response_vector: The colonies did not fight back against the dark forces when they declared war. Instead, they sent most of their people into hiding in order to rebuild the colonies later. They also destroyed everything to ensure that their knowledge and technology would not fall into the hands of the dark forces. Additionally, Lemuria and Atlantis were destroyed by their inhabitants to prevent the misuse of their knowledge and technology by the dark forces. ``` Now, we can establish the evaluator classes responsible for measuring each metric. We'll then use a sample response to determine if it meets the test criteria. ```python from llama_index.evaluation import RelevancyEvaluator from llama_index.evaluation import FaithfulnessEvaluator relevancy_gpt4 = RelevancyEvaluator(service_context=service_context_gpt4) faithfulness_gpt4 = FaithfulnessEvaluator(service_context=service_context_gpt4) # Compute faithfulness evaluation eval_result = faithfulness_gpt4.evaluate_response(response=response_vector) # Check the passing parameter in eval_result to see if it passed the evaluation. print( eval_result.passing ) # Relevancy evaluation eval_result = relevancy_gpt4.evaluate_response( query=eval_query, response=response_vector ) # You can check the passing parameter in eval_result to see if it passed the evaluation. print( eval_result.passing ) ``` ``` True True ``` We must run a for-loop to feed each sample from the evaluation dataset and collect the corresponding results. In this situation, we can use the LlamaIndex `BatchEvalRunner` class, which runs the evaluation process in batches and concurrently. This means the evaluation can be done faster. ```python # Batch Evaluator: # BatchEvalRunner computes multiple evaluations in a batch-wise manner. from llama_index.evaluation import BatchEvalRunner # Let's pick top 10 queries to do evaluation batch_eval_queries = queries[:10] # Initiate BatchEvalRunner to compute Faithfulness and Relevancy evaluations. runner = BatchEvalRunner( {""faithfulness"": faithfulness_gpt4, ""relevancy"": relevancy_gpt4}, workers=8, ) # Compute evaluation eval_results = await runner.aevaluate_queries( query_engine, queries=batch_eval_queries ) # get faithfulness score faithfulness_score = sum(result.passing for result in eval_results['faithfulness']) / len(eval_results['faithfulness']) # get relevancy score relevancy_score = sum(result.passing for result in eval_results['relevancy']) / len(eval_results['relevancy']) print( ""> faithfulness_score"", faithfulness_score ) print( ""> relevancy_score"", relevancy_score ) ``` ``` > faithfulness_score 1.0 > relevancy_score 1.0 ``` The batch processing method helps in quickly assessing the system's performance over a range of different queries. 
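Beyond the two aggregate scores, it is often useful to see which individual queries pass or fail each check. Below is a small sketch, assuming the `batch_eval_queries` and `eval_results` variables from the code above and that the runner returns its results in query order.

```python
import pandas as pd

# Tabulate per-query verdicts so individual failures can be inspected directly.
per_query_df = pd.DataFrame(
    {
        'query': batch_eval_queries,
        'faithfulness': [r.passing for r in eval_results['faithfulness']],
        'relevancy': [r.passing for r in eval_results['relevancy']],
    }
)
print(per_query_df)
```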
A faithfulness score of 1.0 signifies that the generated answers contain no hallucinations and are entirely based on retrieved context. Additionally, the Relevance score of 1.0 suggests that the answers generated consistently align with the retrieved context and the queries.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352340-rag-metrics-evaluation 186,RAG - Metrics & Evaluation,"# Conclusion This lesson has guided us through constructing and assessing an RAG pipeline using LlamaIndex, concentrating mainly on evaluating the retrieval system and the responses generated within the pipeline. Assessing LLMs and chatbots presents a challenge due to the subjective nature of their outputs. Perceptions of what constitutes a great response can vary significantly from one person to another. Therefore, examining various facets of an RAG application and evaluating each aspect individually based on specific metrics is prudent. --- >> [Notebook](https://colab.research.google.com/drive/1XcCcoUi5ZbUNmTCLYXL51zeD-6mXON5T?usp=sharing). --- ### RESOURCES: - [Response Evaluation](https://docs.llamaindex.ai/en/latest/module_guides/evaluating/usage_pattern.html) - [Retrieval](https://docs.llamaindex.ai/en/latest/module_guides/evaluating/usage_pattern_retrieval.html) Evaluation - **openai-cookbook-eval** [](https://github.com/openai/openai-cookbook/blob/main/examples/evaluation/Evaluate_RAG_with_LlamaIndex.ipynb) - **llamaindex** [Evaluating - LlamaIndex 🦙 0.9.15](https://docs.llamaindex.ai/en/stable/module_guides/evaluating/root.html) - **golden-dataset** [](https://github.com/microsoft/promptflow-resource-hub/blob/main/sample_gallery/golden_dataset/copilot-golden-dataset-creation-guidance.md) - **RAGAS** [](https://github.com/explodinggradients/ragas/blob/main/docs/howtos/integrations/llamaindex.ipynb) - `RagEvaluatorPack` [Downloading a LlamaDataset from LlamaHub - LlamaIndex 🦙 0.9.15.post2](https://docs.llamaindex.ai/en/stable/examples/llama_dataset/downloading_llama_datasets.html)",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352340-rag-metrics-evaluation 187,Creating a RAG AI Tutor Bot with TowardsAI & Activ,"# Creating a RAG AI Tutor Bot with TowardsAI & Activeloop [https://lh7-us.googleusercontent.com/CMrM62nvjTbLm41Dq5ILsc9PhkQgm7QyKmnpMinLQeN7kh7kUQy4Z5J6oC2cV-_Vt3JwXUmgoBbD2va6UqhPw5ZkFub75tc6awr56kqbP_0ztvpUq0GVLqjUz-CJ8K0q9SwAwK6pWE3_b0wdnHOGMQU](https://lh7-us.googleusercontent.com/CMrM62nvjTbLm41Dq5ILsc9PhkQgm7QyKmnpMinLQeN7kh7kUQy4Z5J6oC2cV-_Vt3JwXUmgoBbD2va6UqhPw5ZkFub75tc6awr56kqbP_0ztvpUq0GVLqjUz-CJ8K0q9SwAwK6pWE3_b0wdnHOGMQU) **Abstract** This technical report describes the development and deployment of a Retrieval Augmented Generation (RAG) based AI Tutor to provide scalable, personalized support for students enrolled in the Gen AI 360 online courses. The solution leverages Activeloop’s database for AI, 4th Gen Intel® Xeon® processors, and Intel® oneAPI Math Kernel Library (oneMKL). The RAG AI Tutor project showcases the capabilities of Intel's 4th Gen Xeon® processors and oneAPI Math Kernel Library (oneMKL) in driving high-performance computing for AI applications. 
Intel's technology enables real-time AI interactions and efficient cosine similarity computations, crucial for the RAG system's embedding retrieval, demonstrating Intel® hardware and software's synergy in advanced AI tasks.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51335034-introducing-the-towards-ai-tutor-bot-powered-by-activeloop-and-intel 188,Creating a RAG AI Tutor Bot with TowardsAI & Activ,"# Creating a RAG AI Tutor Bot with TowardsAI & Activeloop ## **Introduction** The Gen AI 360, facilitated by Activeloop, Towards AI, & Intel Disruptor Initiative, faces challenges in scaling student support for its extensive online courses. A RAG-based AI Tutor has been developed to augment human tutor efforts by providing immediate, accurate responses to student inquiries.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51335034-introducing-the-towards-ai-tutor-bot-powered-by-activeloop-and-intel 189,Creating a RAG AI Tutor Bot with TowardsAI & Activ,"# Creating a RAG AI Tutor Bot with TowardsAI & Activeloop ## **Problem Statement** Scalability issues arise when providing support for thousands of students. There is a need for an AI-based solution that can deliver individualized support for routine queries while ensuring knowledge relevance accuracy and reducing hallucinations.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51335034-introducing-the-towards-ai-tutor-bot-powered-by-activeloop-and-intel 190,Creating a RAG AI Tutor Bot with TowardsAI & Activ,"# Creating a RAG AI Tutor Bot with TowardsAI & Activeloop ## **The RAG AI Tutor Solution** A partnership between Towards AI, Activeloop, and Intel has resulted in an RAG AI Tutor that supports tens of thousands of students. ### **Knowledge Base** - Content: Access to a library of ~4,000 AI tutorials and articles, including updated information on LLMs and articles from the GenAI360 course. - Storage: 21,000 chunks of data (~500 words each) transformed into embedding vectors. ### **Relevancy, Accuracy, and Hallucinations** - Technique: Utilizes RAG and Deep Memory by Activeloop. - Implementation: Strictly answer questions from the embedded knowledge base to reduce misinformation. ### **Cost, Latency, and Scalability** - Hardware: Utilization of 4th Gen Intel® Xeon® processors. - Optimization: Balancing the number of sources and data chunks to enhance speed and reduce costs.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51335034-introducing-the-towards-ai-tutor-bot-powered-by-activeloop-and-intel 191,Creating a RAG AI Tutor Bot with TowardsAI & Activ,"# Creating a RAG AI Tutor Bot with TowardsAI & Activeloop ## **Technology Stack** - LLMs: OpenAI’s ada-002 embeddings and GPT-Turbo 16k LLM. - Intel Inside: 4th Gen Intel® Xeon® processors for low-latency LLM inference with Intel® AVX-512, Intel® oneAPI Math Kernel Library (oneMKL). - Data Storage: Activeloop Deep Lake is a database for AI that hosts embeddings and metadata and enables deep memory for better retrieval.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51335034-introducing-the-towards-ai-tutor-bot-powered-by-activeloop-and-intel 192,Creating a RAG AI Tutor Bot with TowardsAI & Activ,"# Creating a RAG AI Tutor Bot with TowardsAI & Activeloop ## **Technical Achievements** - Compute Efficiency: A 22.93% increase in cosine similarity computations was achieved with 4th Gen Intel® Xeon® compared to 3rd Gen Xeon® Processors. 
- Accuracy Improvement: The implementation of Deep Memory noted a 20% increase in recall@10 for embedding retrieval. [https://lh7-us.googleusercontent.com/JgQAdnzCc94GLaADelYAxDkJ-K3SBG_vYq-J9OjosIFwtFHbBht4pdT5LSFBVGGtIFa8zz2YSE1zrB9rvn2dBDBPtpUBMEPbqTSSJCAM-oKGgHxkzJ-6XnLp06MZERn_C-J9l2pJDUGiKWhKnpLaflk](https://lh7-us.googleusercontent.com/JgQAdnzCc94GLaADelYAxDkJ-K3SBG_vYq-J9OjosIFwtFHbBht4pdT5LSFBVGGtIFa8zz2YSE1zrB9rvn2dBDBPtpUBMEPbqTSSJCAM-oKGgHxkzJ-6XnLp06MZERn_C-J9l2pJDUGiKWhKnpLaflk)",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51335034-introducing-the-towards-ai-tutor-bot-powered-by-activeloop-and-intel 193,Creating a RAG AI Tutor Bot with TowardsAI & Activ,"# Creating a RAG AI Tutor Bot with TowardsAI & Activeloop ## **Implementation in Educational Context** The RAG AI Tutor is integrated with Gen AI 360 course offerings, providing in-depth lessons and tutorials, practical coding projects, real-time AI assistance for technical Q&A, and access to updated AI-related content and community support.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51335034-introducing-the-towards-ai-tutor-bot-powered-by-activeloop-and-intel 194,Creating a RAG AI Tutor Bot with TowardsAI & Activ,"# Creating a RAG AI Tutor Bot with TowardsAI & Activeloop ## **Product Outcome** - Response Time: An AI Tutor delivers answers with a 0.0243-second response time. - Accuracy: The utilization of pre-filtered knowledge sources and the Deep Memory by Activeloop dramatically reduces the risk of hallucinations. - Efficiency: Demonstrated speed improvements and accurate query handling.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51335034-introducing-the-towards-ai-tutor-bot-powered-by-activeloop-and-intel 195,Creating a RAG AI Tutor Bot with TowardsAI & Activ,"# Creating a RAG AI Tutor Bot with TowardsAI & Activeloop ## **Future Steps** - Deployment: Introduction of a Discord bot for real-time student interaction. - Optimization: Continual refinement of the data sources for the RAG model. - Experimentation: Potential fine-tuning of open-source LLMs and new embedding models.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51335034-introducing-the-towards-ai-tutor-bot-powered-by-activeloop-and-intel 196,Creating a RAG AI Tutor Bot with TowardsAI & Activ,"# Creating a RAG AI Tutor Bot with TowardsAI & Activeloop ## **Key Learnings** - Integrated AI solutions are vital for scalable student support in online education. - The selection of hardware and database management systems is crucial for the performance and cost-effectiveness of AI applications.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51335034-introducing-the-towards-ai-tutor-bot-powered-by-activeloop-and-intel 197,Creating a RAG AI Tutor Bot with TowardsAI & Activ,"# Creating a RAG AI Tutor Bot with TowardsAI & Activeloop ## **Conclusion** The collaborative development of the RAG AI Tutor represents an advancement in AI-assisted education, focusing on scalability, accuracy, and speed. Implementing efficient hardware and vector databases is critical to the success of AI applications in education. **Disclaimers** Performance varies by use, configuration, and other factors. Learn more on the/Performance Index site. Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. 
No product or component can be absolutely secure. Your costs and results may vary. For workloads and configurations, visit 4th Gen Xeon® Scalable processors at www.intel.com/processorclaims. Results may vary. Intel technologies may require enabled hardware, software or service activation. Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy. Intel® technologies may require enabled hardware, software, or service activation. © Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51335034-introducing-the-towards-ai-tutor-bot-powered-by-activeloop-and-intel 198,LangSmith Introduction,"# LangSmith Introduction In this lesson, we will recap LangChain components, review its essential concepts, and discuss how to use the newly introduced LangSmith platform. Additionally, we will create a basic Large Language Model application to understand its capabilities better.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352346-langsmith-introduction 199,LangSmith Introduction,"# LangChain Recap LangChain is a specialized framework designed for building LLM-powered applications. It streamlines the development of intelligent, responsive LLMs and Libraries for handling chains and agents with integrated components. It also offers [Templates](https://python.langchain.com/docs/templates) for deployable task-specific architectures and [LangSmith](https://python.langchain.com/docs/langsmith) for debugging in a testing environment. The key features of LangChain, like Models, Vector Stores, Chains, etc., have been explained in detail in the previous lesson. ",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352346-langsmith-introduction 200,LangSmith Introduction,"# **LangChain Hub** LangChain Hub is a centralized repository for community-sourced prompts tailored to various use cases like classification or summarization. The Hub supports both public contributions and private organizational use, fostering a collaborative development environment. The platform's version control system enables users to track prompt modifications and maintain consistency across applications. The Hub offers features like **Prompt Exploration**, ideal for fresh interactions with language models or specific prompts to achieve particular objectives. It also simplifies the process of finding and utilizing effective prompts for various models. Additionally, the user can share, modify, and track prompt versions with **Prompt Versioning**. It allows for the easy management of different versions of prompts, a highly relevant feature in real-world projects where reverting to earlier versions may be necessary. The user-friendly interface of the Hub allows for prompt testing, customization, and iteration in a playground environment. ",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352346-langsmith-introduction 201,LangSmith Introduction,"# LangSmith [LangSmith](https://www.langchain.com/langsmith) provides an environment for evaluating and monitoring the quality of LLM outputs. An integral part of its functionality includes metadata monitoring, token usage, and execution time, which are crucial for resource management. 
This platform facilitates the refinement of new chains and tools, potentially enhancing their efficiency and performance. Users can create diverse testing environments tailored to specific needs, enabling thorough evaluation under various conditions. Additionally, the service provides visualization tools that can aid in identifying response patterns and trends, thereby supporting a deeper understanding and assessment of performance. Lastly, the platform supports tracing the runs associated with an active instance and testing and evaluating any prompts or answers generated. LangSmith is designed with user-friendliness in mind. The platform offers a range of tutorials and documentation to help you get started. The setup for LangChain requires installing the necessary libraries and configuring the required environment variables, which we will cover in the following section. For certain functionalities like tracing, you need to have a LangSmith account. Please follow the steps outlined below to set up a new account. - Head over to the [LangSmith](https://www.langchain.com/langsmith) website and sign up for an account. You can use various supported login methods. - Once your account is set up, go to the settings page. Here, you'll find the option to create an API key. - Click the 'Generate API Key' button to receive your API key.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352346-langsmith-introduction 202,LangSmith Introduction,"# LangSmith ## Versioning You can commit a prompt after implementing and debugging your chain. Add this prompt under your handle's namespace to view it in the Hub. ```python from langchain import hub from langchain.prompts.chat import ChatPromptTemplate prompt = ChatPromptTemplate.from_template(""tell me a joke about {topic}"") handle = """" hub.push(f""{handle}/rag"", prompt) ``` During evaluation, if you come up with a better idea after trying the prompt, you can push the updated prompt to the same key to ""commit"" a new version of the prompt. For instance, let's add a system message to the prompt. ```python # You may try making other changes and saving them in a new commit. from langchain import schema prompt.messages.insert(0, schema.SystemMessage( content=""You are a precise, autoregressive question-answering system."" ) ) ``` With the saved changes, we can analyze how the change reflects the model performance. The newest version of the prompt is saved as the latest version. ```python # Pushing to the same prompt ""repo"" will create a new commit hub.push(f""{handle}/rag-prompt"", prompt) ```",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352346-langsmith-introduction 203,LangSmith Introduction,"# LangSmith ## Tracing LangSmith allows users to review the inputs and outputs of each element in the chain by simplifying the process of logging runs for your LLM applications. This feature is useful when debugging your application or understanding the behavior of specific components. The following section will explore the optional environment variables that enable the tracing feature. For more information, you can [visit the documentation](https://docs.smith.langchain.com/tracing#log-runs).",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352346-langsmith-introduction 204,LangSmith Introduction,"# LangSmith ## **Serving (LangServe)** [LangServe](https://www.langchain.com/langserve) helps developers deploy LangChain-powered applications and chains as a REST API. 
It is integrated with FastAPI, which makes the process of creating API endpoints easy and accessible. It is possible to quickly deploy applications by using the `langserve` package. The deployment process is beyond the scope of this course; however, you can learn more about it from the [GitHub repository](https://github.com/langchain-ai/langserve?ref=blog.langchain.dev).",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352346-langsmith-introduction 205,LangSmith Introduction,"# QuestionAnswering Chain & LangChain Hub The next steps are loading data from a webpage, splitting it into smaller chunks, transforming them into embeddings, storing them on the Deep Lake vector store, and utilizing the prompt templates from the [LangSmith Hub](https://smith.langchain.com/hub). Before exploring the code, installing the essential libraries from the Python package (pip) manager is necessary. ```bash pip install -q langchain==0.0.346 openai==1.3.7 tiktoken==0.5.2 cohere==4.37 deeplake==3.8.11 langchainhub==0.1.14 ``` The next step is to set the API keys in the environment for OpenAI, utilized in the embedding generation process, and the Activeloop key, required for storing data in the cloud. ```python import os os.environ[""OPENAI_API_KEY""] = """" os.environ[""ACTIVELOOP_TOKEN""] = """" ``` You can optionally set the following environment variables to keep track of the runs in the LangSmith dashboard under the projects section. Note that environment variables must be strings, so the tracing flag is set to the string ""true"". ```python os.environ[""LANGCHAIN_TRACING_V2""] = ""true"" os.environ[""LANGCHAIN_ENDPOINT""] = ""https://api.smith.langchain.com"" os.environ[""LANGCHAIN_API_KEY""] = """" os.environ[""LANGCHAIN_PROJECT""] = ""langsmith-intro"" # if not specified, defaults to ""default"" ``` Now, we can read the content of a webpage using the `WebBaseLoader` class. It will return a single instance of the `Document` class containing all the textual information from the mentioned address. Subsequently, the lengthy text is divided into smaller segments of 500 characters each, with no overlap, resulting in 130 chunks. ```python from langchain.document_loaders import WebBaseLoader from langchain.text_splitter import RecursiveCharacterTextSplitter # Loading loader = WebBaseLoader(""https://lilianweng.github.io/posts/2023-06-23-agent/"") data = loader.load() print(len(data)) # Split text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0) all_splits = text_splitter.split_documents(data) print(len(all_splits)) ``` ``` 1 130 ``` Chunks can be saved to the Deep Lake vector store through the LangChain integration. The DeepLake class handles converting texts into embeddings via OpenAI's API and then stores these results in the cloud. The dataset can be loaded from the GenAI360 course organization, or you can use your organization name (which defaults to your username) to create the dataset. Note that this task incurs the associated costs of using OpenAI endpoints. ```python from langchain.embeddings import OpenAIEmbeddings from langchain.vectorstores import DeepLake vectorstore = DeepLake.from_documents( all_splits, dataset_path=""hub://genai360/langsmith_intro"", embedding=OpenAIEmbeddings(), overwrite=False) ``` ``` Your Deep Lake dataset has been successfully created! 
Creating 130 embeddings in 1 batches of size 130:: 100%|██████████| 1/1 [00:05<00:00, 5.81s/it] dataset (path='hub://genai360/langsmith_intro', tensors=['text', 'metadata', 'embedding', 'id']) tensor htype shape dtype compression ------- ------- ------- ------- ------- text text (130, 1) str None metadata json (130, 1) str None embedding embedding (130, 1536) float32 None id text (130, 1) str None ``` Once the data is processed, we can retrieve a prompt from the LangChain hub, which provides a `ChatPromptTemplate` instance. This eliminates the need for designing a prompt through trial and error, allowing us to build upon already tested implementations. The following code tagged a specific prompt version so future changes would not impact the active deployment version. ```python from langchain import hub prompt = hub.pull(""rlm/rag-prompt:50442af1"") print(prompt) ``` ``` ChatPromptTemplate(input_variables=['context', 'question'], messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['context', 'question'], template=""You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.\nQuestion: {question} \nContext: {context} \nAnswer:""))]) ``` Finally, we can employ the `RetrievalQA` chain to fetch related documents from the database and utilize the `ChatOpenAI` model to use these documents to generate our final response. ```python # LLM from langchain.chains import RetrievalQA from langchain.chat_models import ChatOpenAI llm = ChatOpenAI(model_name=""gpt-3.5-turbo"", temperature=0) # RetrievalQA qa_chain = RetrievalQA.from_chain_type( llm, retriever=vectorstore.as_retriever(), chain_type_kwargs={""prompt"": prompt} ) question = ""What are the approaches to Task Decomposition?"" result = qa_chain({""query"": question}) result[""result""] ``` ``` The approaches to task decomposition include using LLM with simple prompting, task-specific instructions, and human inputs. ``` Prompt versioning supports ongoing experimentation and collaboration, effectively preventing the accidental deployment of chain components that haven't been sufficiently validated.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352346-langsmith-introduction 206,LangSmith Introduction,"# Conclusion In this lesson, we discussed how to use the LangChain [Hub](https://colab.research.google.com/corgiredirector?site=https%3A%2F%2Fsmith.langchain.com%2Fhub%3Fpage%3D1) to store and share prompts for a retrieval QA chain. The Hub is a centralized location to manage, version, and share prompts. LangSmith excels in diagnosing errors, comparing prompt effectiveness, assessing output quality, and tracking key metadata like token usage and execution time for optimizing LLM applications. The platform also provides a detailed analysis of how different prompts affected the LLM performance. The intuitive UI and the valuable insights the platform offers make the iterative process of refining LLMs more transparent and manageable. It's evident that the LangSmith platform, even in its beta phase, has the potential to be a significant tool for developers aiming to leverage the full potential of LLMs. LangSmith provides an immediate functionality to sift through your runs and presents metrics. These metrics are essential for quickly assessing latency and the total token count throughout your application. 
>> [Notebook](https://colab.research.google.com/drive/1scwUUFOqgjqCE0CRFe19Llv86GcBt7Sh?usp=sharing). --- ### **RESOURCES**: - hub-examples: LangSmith cookbook [](https://github.com/langchain-ai/langsmith-cookbook/blob/main/hub-examples/retrieval-qa-chain/retrieval-qa.ipynb) - the art of LangSmith article [The Art of LangSmithing](https://betterprogramming.pub/the-art-of-langsmithing-42dcd191a220)",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51352346-langsmith-introduction 207,Fine-tuning vs RAG; Introduction to Activeloop’s D,"# Introduction In this lesson, we will explore optimization techniques that maximize large language model performance. We will learn about the appropriate use of prompt engineering, retrieval augmented generation (RAG), and fine-tuning, distinguishing how each method contributes and their specific challenges. A significant portion of the lesson will be dedicated to addressing the limitations of RAG systems in real-world applications. These mainly include maintaining high retrieval accuracy and ensuring accurate responses from LLMs. Much of our discussion will include [Activeloop's Deep Memory](https://docs.activeloop.ai/performance-features/deep-memory), a technique designed to augment the retrieval precision of embeddings for user queries. We will also perform a detailed comparison of empirical data, analyzing the differences in retrieval recall rates between systems employing Deep Memory and those that do not.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320344-when-to-do-what-using-llms-rag-vs-fine-tuning-vs-deep-memory-vs-training-llm-from-scratch 208,Fine-tuning vs RAG; Introduction to Activeloop’s D,"# Overview of RAG E**nhancement Techniques** Expanding on the discussion surrounding fine-tuning, retrieval-augmented generation, and prompt engineering, it's essential to understand each approach's distinct strengths, weaknesses, and most suitable applications.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320344-when-to-do-what-using-llms-rag-vs-fine-tuning-vs-deep-memory-vs-training-llm-from-scratch 209,Fine-tuning vs RAG; Introduction to Activeloop’s D,"# Overview of RAG E**nhancement Techniques** ## Prompt engineering Prompt engineering is often the first step in enhancing the performance of an LLM for specific tasks. This approach alone can be sufficient, especially for simpler or well-defined tasks. Techniques like [few-shot prompting](https://www.promptingguide.ai/techniques/fewshot) can notably improve task performance. This method involves providing small task-specific examples to guide the LLM. [Chain of Thought (CoT)](https://arxiv.org/abs/2201.11903) prompting can also improve reasoning capabilities and encourage the model to generate more detailed responses. Combining Few-shot with RAG—using a tailored dataset of examples to retrieve the most relevant information for each query—can be more effective.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320344-when-to-do-what-using-llms-rag-vs-fine-tuning-vs-deep-memory-vs-training-llm-from-scratch 210,Fine-tuning vs RAG; Introduction to Activeloop’s D,"# Overview of RAG E**nhancement Techniques** ## **Fine-tuning** Fine-tuning enhances LLM’s capabilities in the following areas: 1. Modifying the **structure** or **tone** of responses. 2. Teaching the model to follow complex instructions. 
For example, fine-tuning enables models to perform tasks like extracting JSON-formatted data from text, translating natural language into SQL queries, or adopting a specific writing style. Fine-tuning demands a large, high-quality, task-specific dataset for effective training. You can start with a small dataset and training to see if the method works for your task. Fine-tuning is less effective in adapting to new, rapidly changing data or unfamiliar queries beyond the training dataset. It's also not the best choice for incorporating new information into the model. Alternative methods, such as Retrieval-Augmented Generation, are more suitable. ![From [A Survey of Techniques for Maximizing LLM Performance](https://www.youtube.com/watch?v=ahnGLM-RC1Y&t=344s)](Fine-tuning%20vs%20RAG;%20Introduction%20to%20Activeloop%E2%80%99s%20D%20b072ea133ad2469693dcd5e53fd4c87c/Fine-tuning%20vs%20RAG;%20Introduction%20to%20Activeloop%E2%80%99s%20D%20fc5328546cb14262a1afada413848a28/Untitled.png) From [A Survey of Techniques for Maximizing LLM Performance](https://www.youtube.com/watch?v=ahnGLM-RC1Y&t=344s)",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320344-when-to-do-what-using-llms-rag-vs-fine-tuning-vs-deep-memory-vs-training-llm-from-scratch 211,Fine-tuning vs RAG; Introduction to Activeloop’s D,"# Overview of RAG E**nhancement Techniques** ## **Retrieval-Augmented Generation** RAG specializes in incorporating **external knowledge**, enabling the model to access current and varied information. **Real-Time Updates**: It is more adept at dealing with evolving datasets and can provide more up-to-date responses. **Complexity in Integration**: Setting up a RAG system is more complex than basic prompting, requiring extra components like a Vector Database and retrieval algorithms. **Data Management**: Managing and updating the external data sources is crucial for maintaining the accuracy and relevance of its outputs. **Retrieval accuracy:** Ensuring precise embedding retrieval is crucial in RAG systems to guarantee reliable and comprehensive responses to user queries. For that, we will demonstrate how Activeloop’s Deep Memory method can greatly increase the recall of embedding retrieval.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320344-when-to-do-what-using-llms-rag-vs-fine-tuning-vs-deep-memory-vs-training-llm-from-scratch 212,Fine-tuning vs RAG; Introduction to Activeloop’s D,"# Overview of RAG E**nhancement Techniques** ## RAG + Fine-tuning Fine-tuning and RAGs are not mutually exclusive techniques. Fine-tuning brings the advantage of customizing models for a specific style or format, which can be useful when using LLMs for specific domains such as medical, financial, or legal, requiring a highly specialized tone of writing. When combined with RAG, the model becomes adept in its specialized area and gains access to a vast range of external information. The resulting model provides accurate responses in the niche area. Implementing these two methods can demand considerable resources for setup and ongoing upkeep. It involves multiple training runs of fine-tuning with the data handling requirements inherent to RAG. 
![From [A Survey of Techniques for Maximizing LLM Performance](https://www.youtube.com/watch?v=ahnGLM-RC1Y&t=344s) ](Fine-tuning%20vs%20RAG;%20Introduction%20to%20Activeloop%E2%80%99s%20D%20b072ea133ad2469693dcd5e53fd4c87c/Fine-tuning%20vs%20RAG;%20Introduction%20to%20Activeloop%E2%80%99s%20D%20fc5328546cb14262a1afada413848a28/Screenshot_2023-12-13_at_3.43.57_PM.png) From [A Survey of Techniques for Maximizing LLM Performance](https://www.youtube.com/watch?v=ahnGLM-RC1Y&t=344s)",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320344-when-to-do-what-using-llms-rag-vs-fine-tuning-vs-deep-memory-vs-training-llm-from-scratch 213,Fine-tuning vs RAG; Introduction to Activeloop’s D,"# Enhanced RAG with Deep Memory [Deep Memory](https://docs.activeloop.ai/performance-features/deep-memory) is a method developed by Activeloop to boost the accuracy of embedding retrieval for RAG systems integrated into the Deep Lake vector store database. Central to its functionality is an embedding transformation process. Deep Memory trains a model that transforms embeddings into a space optimized for your use case. This reconfiguration significantly improves vector search accuracy. Deep Memory is effective where query reformulation, query transformation, or document re-ranking might cause latency and increased token usage. It boosts retrieval capabilities without negatively impacting the system's performance. The figure below shows the recall performance for different algorithms compared to Deep Memory. **Recall@1**: This measures whether the top result (i.e., the first result) returned by the retrieval system is relevant to the query. **Recall@10**: This metric assesses whether the relevant document is within the top 10 results returned by the retrieval system. ![From [Activeloop’s Deep Memory documentation](https://docs.activeloop.ai/performance-features/deep-memory)](Fine-tuning%20vs%20RAG;%20Introduction%20to%20Activeloop%E2%80%99s%20D%20b072ea133ad2469693dcd5e53fd4c87c/Fine-tuning%20vs%20RAG;%20Introduction%20to%20Activeloop%E2%80%99s%20D%20fc5328546cb14262a1afada413848a28/Untitled%201.png) From [Activeloop’s Deep Memory documentation](https://docs.activeloop.ai/performance-features/deep-memory) **Comparison to Lexical search** [BM25](https://en.wikipedia.org/wiki/Okapi_BM25) is considered a state-of-the-art approach for ""lexical search,"" based on the explicit presence of words (or lexicons) from the query in the documents. It's particularly effective for applications where the relevance of documents depends heavily on the presence of specific terms, such as in traditional search engines. 
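To make the idea of lexical search concrete, here is a minimal sketch using the open-source `rank_bm25` package (`pip install rank-bm25`); the corpus and query are illustrative only and are not part of the lesson's dataset.

```python
from rank_bm25 import BM25Okapi

# A tiny illustrative corpus; BM25 scores documents purely by term overlap
corpus = [
    'Deep Memory improves embedding retrieval accuracy for RAG systems',
    'BM25 ranks documents by the presence of query terms',
    'Vector search relies on neural embeddings and cosine similarity',
]
tokenized_corpus = [doc.lower().split() for doc in corpus]
bm25 = BM25Okapi(tokenized_corpus)

query = 'embedding retrieval accuracy'.lower().split()
print(bm25.get_scores(query))              # lexical relevance score per document
print(bm25.get_top_n(query, corpus, n=1))  # best lexical match
```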
However, BM25 does not account for the semantic relationships between words, where more advanced techniques like vector search with neural embeddings and semantic search come into play.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320344-when-to-do-what-using-llms-rag-vs-fine-tuning-vs-deep-memory-vs-training-llm-from-scratch 214,Fine-tuning vs RAG; Introduction to Activeloop’s D,"# Enhanced RAG with Deep Memory ## Overview of Deep Memory ![[From Activeloop Docs - How Deep Memory Works](https://docs.activeloop.ai/performance-features/deep-memory/how-it-works)](Fine-tuning%20vs%20RAG;%20Introduction%20to%20Activeloop%E2%80%99s%20D%20b072ea133ad2469693dcd5e53fd4c87c/Fine-tuning%20vs%20RAG;%20Introduction%20to%20Activeloop%E2%80%99s%20D%20fc5328546cb14262a1afada413848a28/deepmemory_diagram.png) [From Activeloop Docs - How Deep Memory Works](https://docs.activeloop.ai/performance-features/deep-memory/how-it-works) In the figure above, we see the Inference and Training workflow: 1. **Embeddings**: Vector representation of a text sentence or set of words. We can create them using embedding models such as OpenAI’s [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings/embedding-models) or open-source models. 2. **Deep Memory Training**: A dataset of **query and context pairs** trains the Deep Memory model. This training process runs on the Deep Lake service, which provides the computational resources and infrastructure for handling the training. 3. **Deep Memory Inference**: The model enters the inference phase after training, which transforms query embeddings. We can use the [Tensor Query Language (TQL)](https://docs.activeloop.ai/performance-features/querying-datasets) when running an inference/querying in the Vector Store. 4. **Transformed Embeddings**: The result of the inference process is a set of transformed embeddings optimized for a specific use case. This optimization means that the embeddings are now in a more conducive space for returning accurate results. 5. **Vector Search**: These optimized embeddings are used in a vector search, utilizing standard similarity search techniques (e.g., cosine similarity). The vector search is retrieving information, leveraging the refined embeddings to find and retrieve the most relevant data points for a given query.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320344-when-to-do-what-using-llms-rag-vs-fine-tuning-vs-deep-memory-vs-training-llm-from-scratch 215,Fine-tuning vs RAG; Introduction to Activeloop’s D,"# Enhanced RAG with Deep Memory ## Step by Step - Training a **Deep Memory Model** Moving forward in our lesson, let's implement Deep Memory within our experimental workflow to see firsthand how it impacts retrieval recall. You can follow along with this [Colab notebook.](https://colab.research.google.com/drive/1dgNwwCuA4zqtQDRuyzqd53PbG3PlRHul#scrollTo=lXJYCiwNPbOL) 1. Install the required libraries ```python !pip3 install deeplake langchain openai tiktoken llama-index ``` 1. Set your ACTIVELOOP_TOKEN and OPENAI_API_KEY ```python import os, getpass os.environ['ACTIVELOOP_TOKEN'] = getpass.getpass() ``` ```python os.environ['OPENAI_API_KEY'] = getpass.getpass() ``` 1. Download the data or use your own. Here, we download a [text file](https://raw.githubusercontent.com/run-llama/llama_index/main/docs/examples/data/paul_graham/paul_graham_essay.txt) hosted on GitHub. 
```python !mkdir -p 'data/paul_graham/' !curl 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/examples/data/paul_graham/paul_graham_essay.txt' -o 'data/paul_graham/paul_graham_essay.txt' ``` 1. Create the Llama-index nodes/chunks ```python from llama_index.node_parser import SimpleNodeParser from llama_index import SimpleDirectoryReader documents = SimpleDirectoryReader(""./data/paul_graham/"").load_data() node_parser = SimpleNodeParser.from_defaults(chunk_size=512) nodes = node_parser.get_nodes_from_documents(documents) # By default, the node/chunks ids are set to random uuids. To ensure same id's per run, we manually set them. for idx, node in enumerate(nodes): node.id_ = f""node_{idx}"" print(f""Number of Documents: {len(documents)}"") print(f""Number of nodes: {len(nodes)} with the current chunk size of {node_parser.chunk_size}"") ``` ``` Number of Documents: 1 Number of nodes: 58 with the current chunk size of 512 ``` 1. Create a local Deep Lake vector store ```python from llama_index import VectorStoreIndex, ServiceContext, StorageContext from llama_index.vector_stores import DeepLakeVectorStore from llama_index.embeddings.openai import OpenAIEmbedding from llama_index.llms import OpenAI # Create a DeepLakeVectorStore locally to store the vectors dataset_path = ""./data/paul_graham/deep_lake_db"" vector_store = DeepLakeVectorStore(dataset_path=dataset_path, overwrite=True, exec_option=""compute_engine"") # LLM that will answer questions with the retrieved context llm = OpenAI(model=""gpt-3.5-turbo-1106"") embed_model = OpenAIEmbedding() service_context = ServiceContext.from_defaults(embed_model=embed_model, llm=llm,) storage_context = StorageContext.from_defaults(vector_store=vector_store) vector_index = VectorStoreIndex(nodes, service_context=service_context, storage_context=storage_context, show_progress=True) ``` ``` Uploading data to deeplake dataset. 100%|██████████| 58/58 [00:00<00:00, 274.94it/s]Dataset(path='./data/paul_graham/deep_lake_db', tensors=['text', 'metadata', 'embedding', 'id']) tensor htype shape dtype compression ------- ------- ------- ------- ------- text text (58, 1) str None metadata json (58, 1) str None embedding embedding (58, 1536) float32 None id text (58, 1) str None ``` 1. Now, let's upload the local Vectore Store to Activeloop's platform and convert it into a managed database. ```python import deeplake local = ""./data/paul_graham/deep_lake_db"" hub_path = ""hub://genai360/LlamaIndex_paulgraham_essay"" hub_managed_path = ""hub://genai360/LlamaIndex_paulgraham_essay_managed"" # First upload our local vector store deeplake.deepcopy(local, hub_path, overwrite=True) # Create a managed vector store under a different name deeplake.deepcopy(hub_path, hub_managed_path, overwrite=True, runtime={""tensor_db"": True}) ``` 1. Instantiate a Vector Store with the managed dataset that we just created. ```python db = DeepLakeVectorStore(dataset_path=hub_managed_path, overwrite=False, exec_option=""compute_engine"", read_only=True,) ``` Now, let’s generate a dataset of Queries and Documents 1. Fetching our `docs` and `ids` from the vector store. ```python # Fetch dataset docs and ids docs = db.vectorstore.dataset.text.data(fetch_chunks=True, aslist=True)['value'] ids = db.vectorstore.dataset.id.data(fetch_chunks=True, aslist=True)['value'] print(len(docs)) ``` 1. Generating a synthetic training dataset. We need labeled data (`query` and `document_id` pairs) to train a Deep Memory model. 
Sometimes, it can be difficult to get labeled data when you are starting from scratch. This tutorial generates queries/questions using `gpt-3.5-turbo` from our existing documents. ```python from openai import OpenAI client = OpenAI() def generate_question(text): try: response = client.chat.completions.create( model=""gpt-3.5-turbo-1106"", messages=[ {""role"": ""system"", ""content"": ""You are a world class expert for generating questions based on provided context. \ You make sure the question can be answered by the text.""}, { ""role"": ""user"", ""content"": text, }, ], ) return response.choices[0].message.content except: question_string = ""No question generated"" return question_string ``` ```python import random from tqdm import tqdm def generate_queries(docs: list[str], ids: list[str], n: int): questions = [] relevances = [] pbar = tqdm(total=n) while len(questions) < n: # 1. randomly draw a piece of text and relevance id r = random.randint(0, len(docs)-1) text, label = docs[r], ids[r] # 2. generate queries and assign and relevance id generated_qs = [generate_question(text)] if generated_qs == [""No question generated""]: print(""No question generated"") continue questions.extend(generated_qs) relevances.extend([[(label, 1)] for _ in generated_qs]) pbar.update(len(generated_qs)) return questions[:n], relevances[:n] ``` 5.1 Launch",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320344-when-to-do-what-using-llms-rag-vs-fine-tuning-vs-deep-memory-vs-training-llm-from-scratch 216,Fine-tuning vs RAG; Introduction to Activeloop’s D,"# Enhanced RAG with Deep Memory ## Step by Step - Training a **Deep Memory Model** the query generation process with a desired size of 40 queries/questions. ```python questions, relevances = generate_queries(docs, ids, n=40) print(len(questions)) #40 print(questions[0]) ``` You will have a list of generated questions and the associated contexts by running the two cells above. 1. Launch Deep Memory Training ```python # Train deep memory job_id = db.vectorstore.deep_memory.train( queries=questions, relevance=relevances, ) ``` 1. Starting DeepMemory training job Your Deep Lake dataset has been successfully created! Preparing training data for DeepMemory: Creating 20 embeddings in 1 batches of size 20:: 100%|██████████| 1/1 [06:36<00:00, 396.77s/it] DeepMemory training job started. Job ID: 657b3083d528b0fd224173c6 ```python # During training you can check the status of the training run db.vectorstore.deep_memory.status(job_id=""657b3083d528b0fd224173c6"") ``` ```bash -------------------------------------------------------------- | 657b3083d528b0fd224173c6 | -------------------------------------------------------------- | status | completed | -------------------------------------------------------------- | progress | eta: 0.9 seconds | | | recall@10: 60.00% (+25.00%) | -------------------------------------------------------------- | results | recall@10: 60.00% (+25.00%) | -------------------------------------------------------------- ``` We see an increase of 25% in recall@10 after finetuning. 1. Run a Deep Memory-enabled inference by setting `deep_memory=True`. 
```python # Define your question query = ""What is the role of the 'encode' method in tokenizers?"" # Run a similarity search with Deep Memory enabled search_results = db.similarity_search(query=query, deep_memory=True, k=1) # Print the search results print(search_results) # Search results: [Document(page_content='A tokenizer is in charge of preparing the inputs for a model. The library contains tokenizers for all the models. Most\nof the tokenizers are available in two flavors: a full python implementation and a “Fast” implementation based on the\nRust library 🤗 Tokenizers. The “Fast” implementations allows:\na significant speed-up in particular when doing batched tokenization and additional methods to map between the original string (character and words) and the token space (e.g. getting the\nindex of the token comprising a given character or the span of characters corresponding to a given token).\nThe base classes PreTrainedTokenizer and PreTrainedTokenizerFast\nimplement the common methods for encoding string inputs in model inputs (see below) and instantiating/saving python and\n“Fast” tokeni', metadata={'Unnamed: 0': 16245, 'title': 'Tokenizer', 'url': 'https://huggingface.co/docs/transformers/main/en/main_classes/tokenizer#tokenizer', 'source': 'hf_transformers'})] ``` 1. Now, let's run a quantitative evaluation on another set of synthetically generated test queries. ```python # Generate validation queries validation_questions, validation_relevances = generate_queries(docs, ids, n=100) # LangChain's OpenAIEmbeddings provides the embed_documents function used below from langchain.embeddings.openai import OpenAIEmbeddings openai_embeddings = OpenAIEmbeddings() # Launch the evaluation function recalls = db.vectorstore.deep_memory.evaluate( queries=validation_questions, relevance=validation_relevances, embedding_function=openai_embeddings.embed_documents, ) ``` ```bash Embedding queries took 0.82 seconds ---- Evaluating without Deep Memory ---- Recall@1: 27.0% Recall@3: 42.0% Recall@5: 42.0% Recall@10: 50.0% Recall@50: 67.0% Recall@100: 72.0% ---- Evaluating with Deep Memory ---- Recall@1: 32.0% Recall@3: 45.0% Recall@5: 48.0% Recall@10: 55.0% Recall@50: 69.0% Recall@100: 73.0% ``` Even with this new test dataset, we observe higher recall values when Deep Memory is used. Comparing these numbers with the training results also highlights how much the quality and representativeness of the **query-context dataset** matter for your use case.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320344-when-to-do-what-using-llms-rag-vs-fine-tuning-vs-deep-memory-vs-training-llm-from-scratch 217,Fine-tuning vs RAG; Introduction to Activeloop’s D,"# Conclusion In this lesson, we explored optimization techniques for large language models, covering prompt engineering as a first way to maximize LLM performance, fine-tuning, and Retrieval-Augmented Generation (RAG) for integrating external, up-to-date knowledge. We also discussed combining fine-tuning with RAG for complex, domain-specific applications that require considerable resources. A significant focus was on Activeloop's Deep Memory, which was integrated into RAG systems to enhance embedding retrieval accuracy. Deep Memory outperforms traditional methods such as BM25-based lexical search and plain vector search with cosine similarity, which we demonstrated through higher recall values. It also reduces token usage in LLM prompts compared to query reformulation or transformation. This approach addresses key embedding retrieval challenges and signals a promising future for increasingly capable and versatile LLMs. 
--- **RESOURCES** - Colab with the lesson code [Google Colaboratory](https://colab.research.google.com/drive/1dgNwwCuA4zqtQDRuyzqd53PbG3PlRHul#scrollTo=lXJYCiwNPbOL) - A Survey of Techniques for Maximizing LLM Performance from OpenAI [A Survey of Techniques for Maximizing LLM Performance](https://www.youtube.com/watch?v=ahnGLM-RC1Y) - Deep Memory Blog Post [Use Deep Memory to Boost RAG Apps' Accuracy by up to +22%](https://www.activeloop.ai/resources/use-deep-memory-to-boost-rag-apps-accuracy-by-up-to-22/) - Deep Memory Tutorial [Improving Search Accuracy using Deep Memory](https://docs.activeloop.ai/example-code/tutorials/vector-store/improving-search-accuracy-using-deep-memory) - Llama-index and Deep Memory [How to get +15% RAG hit_rate improvement for question answering on documentation? - LlamaIndex 🦙 0.9.15.post2](https://gpt-index.readthedocs.io/en/stable/examples/retrievers/deep_memory.html)",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51320344-when-to-do-what-using-llms-rag-vs-fine-tuning-vs-deep-memory-vs-training-llm-from-scratch 218,Advanced Retrieval Augmented Generation (RAG) for,"# Advanced Retrieval Augmented Generation (RAG) for Pill Searching This project exploits the most advanced artificial intelligence techniques, specifically those relating to NLP and computer vision, and is made available to healthcare, allowing the user to take a photo of a pill and find information about it. The goal is to upload a photo of a pill and recognize it. To obtain noteworthy results it was decided to divide the problem into different phases which will be explained in the following paragraphs.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51579074-advanced-retrieval-augmented-generation-rag-for-pharmaceuticals-pill-search 219,Advanced Retrieval Augmented Generation (RAG) for,"# Advanced Retrieval Augmented Generation (RAG) for Pill Searching ## Segmentation Initially the image is segmented so that the background does not generate false positives or false negatives, for this phase an algorithm called FastSAM was used. This algorithm is able to perform well on both GPU and CPU and has some characteristics to consider: - **Real-time Solution:** FastSAM capitalizes on the computational prowess of Convolutional Neural Networks (CNNs) to offer a real-time solution for the 'segment anything' task. This feature is particularly beneficial for industrial applications where swift and efficient results are paramount. - **Practical Applications:** FastSAM introduces a novel and practical approach for a wide array of vision tasks, delivering results at a speed that is tens to hundreds of times faster than existing methods, revolutionizing the field. - **Based on YOLOv8-seg:** At its core, FastSAM utilizes YOLOv8-seg, a sophisticated object detector with an integrated instance segmentation branch, enabling it to efficiently generate segmentation masks for all instances in an image. - **Efficiency and Performance:** FastSAM stands out by dramatically lowering computational and resource usage while maintaining high-quality performance. 
It rivals the performance of SAM but requires substantially fewer computational resources. Here is an example of how segmentation is performed, which allows the algorithm in the next phase to focus only on the important part of the image: ![https://github.com/efenocchi/PillSearch-Activeloop/raw/main/normal_image.webp](https://github.com/efenocchi/PillSearch-Activeloop/raw/main/normal_image.webp) ![https://github.com/efenocchi/PillSearch-Activeloop/raw/main/image_masked.webp](https://github.com/efenocchi/PillSearch-Activeloop/raw/main/image_masked.webp) After performing the segmentation, we determine which image in our dataset is most similar to the one we just segmented. This comparison is performed with a neural network called ResNet-18, which captures the important visual information of an image so it can be used in the similarity search phase. Note that the dataset images being compared were also segmented beforehand to avoid the problem described above. If you want to know more about how this technique works, you can read the article [ResNet-18 from Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385).",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51579074-advanced-retrieval-augmented-generation-rag-for-pharmaceuticals-pill-search 220,Advanced Retrieval Augmented Generation (RAG) for,"# Advanced Retrieval Augmented Generation (RAG) for Pill Searching ## Visual Similarity ResNet-18 is a compelling choice for computing visual similarity between images, such as in our application for identifying similarities between pill images. Its effectiveness lies in its architecture and its capability for feature extraction. Here's a breakdown of why ResNet-18 is well-suited for this task: - **Deep Residual Learning:** ResNet-18, a variant of the Residual Network (ResNet) family, incorporates deep residual learning. In ResNet-18, there are 18 layers, including convolutional layers, batch normalization, ReLU activations, and fully connected layers. - **Feature Extraction:** One of the primary strengths of ResNet-18 is its feature extraction capability. In the context of pill images, ResNet-18 can learn and identify intricate patterns, shapes, and colors that are unique to different pills. During the forward pass, as the image goes through successive layers, the network learns hierarchically more complex and abstract features. Initial layers might detect edges or basic shapes, while deeper layers can identify more specific features relevant to different types of pills. - **Efficiency and Speed:** Despite being deep, ResNet-18 is relatively lightweight compared to other deeper models (like ResNet-50 or ResNet-101). This makes it a good choice for applications where computational resources or inference time might be a concern, without significantly compromising on the accuracy of feature extraction. - **Performance in Feature Embedding:** For tasks like visual similarity, it's essential to have a robust feature embedding, which is a compressed representation of the input image. ResNet-18, due to its deep structure, can create rich, discriminative embeddings. When we input two pill images, ResNet-18 processes them to produce feature vectors. The similarity between these vectors can then be computed using metrics like cosine similarity or Euclidean distance. The closer these vectors are in the feature space, the more similar the images are. 
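As an illustration of this embed-and-compare idea, here is a minimal sketch using torchvision's pretrained ResNet-18 with the classification head removed; the image paths are hypothetical placeholders, and the exact preprocessing used in the project may differ.

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Pretrained ResNet-18 with the final classification layer removed,
# leaving the 512-dimensional pooled feature vector.
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
feature_extractor = torch.nn.Sequential(*list(resnet.children())[:-1]).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(image_path):
    # Turn a (segmented) pill image into a 512-dimensional feature vector
    image = Image.open(image_path).convert('RGB')
    with torch.no_grad():
        features = feature_extractor(preprocess(image).unsqueeze(0))
    return features.flatten()

# Hypothetical file names for two segmented pill images
emb_a, emb_b = embed('pill_a.png'), embed('pill_b.png')
similarity = torch.nn.functional.cosine_similarity(emb_a, emb_b, dim=0)
print(f'Cosine similarity: {similarity.item():.4f}')
```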
This similarity is performed directly in Activeloop's Deep Lake Vector Stores, simply take the input image and pass it to Activeloop and it will return the `n` most similar images that were found. If you want to delve deeper into this topic you can find a guide on `Activeloop` at the [following link](https://docs.activeloop.ai/example-code/getting-started/vector-store/step-4-customizing-vector-stores). Going into a little more detail in this project we can see how visual similarity search is not the only one that has been used. Once the `n` most similar images have been returned they are split into two groups: - the `3` most similar images - the remaining `n - 3` images To talk about the second type of similarity we take an intermediate step and show what the user interface looks like.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51579074-advanced-retrieval-augmented-generation-rag-for-pharmaceuticals-pill-search 221,Advanced Retrieval Augmented Generation (RAG) for,"# Advanced Retrieval Augmented Generation (RAG) for Pill Searching ## Text Extraction and Identification In order to extract the text engraved in the pill, purely computer vision approaches were initially used and subsequently GPT-4 vision was chosen. This SAAS (software as a service) allows us to recover the text present in the pill which will then be compared with those present in the database. If a perfect match occurs, this pill will be identified as the input one, otherwise the closest image will be chosen.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51579074-advanced-retrieval-augmented-generation-rag-for-pharmaceuticals-pill-search 222,Advanced Retrieval Augmented Generation (RAG) for,"# Advanced Retrieval Augmented Generation (RAG) for Pill Searching ## Gradio Interface Gradio is an open-source Python library that provides an easy way to create customizable web interfaces for machine learning models. In this pill project, we have utilized Gradio to build a user-friendly interface that allows users to upload pill images and interact with the pipeline. The results are then displayed back to the user through the Gradio interface and are divided into two different columns: - in the first there are the `3 images` most similar to the input one - in the second there are `3 similar images` to which we must pay attention because they have a description of the pill as different as possible from the one inserted. It is necessary to specify that since the input image has no text but is just an image, the description taken is that of the image whose unique identification code is equal to one of those present in the dataset or, in case there is no exact match, that of the image which is absolutely most similar to the input one. The interface that the user will initially load is as follows: ![https://github.com/efenocchi/PillSearch-Activeloop/raw/main/gradio_ui.webp](https://github.com/efenocchi/PillSearch-Activeloop/raw/main/gradio_ui.webp)",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51579074-advanced-retrieval-augmented-generation-rag-for-pharmaceuticals-pill-search 223,Advanced Retrieval Augmented Generation (RAG) for,"# Advanced Retrieval Augmented Generation (RAG) for Pill Searching ## Activeloop Visualizer To show the images returned after the search, the Activeloop rendering engine called Visualizer was used. This functionality allows us to view the data present in the Deep Lake by loading it in HTML format. 
This made it possible to embed the Activeloop visualization engine directly into our RAG application. ![Gradio interface showing the search results rendered with the Activeloop Visualizer](https://github.com/efenocchi/PillSearch-Activeloop/raw/main/gradio_ui_results.webp) In our case, each cell returns only the image we need, but if desired it is possible to view all the images in a single window and move between them using the cursor. If you want to delve deeper into this functionality and integrate it into your application via Python or JavaScript code, you can find the guide [here](https://docs.activeloop.ai/technical-details/visualizer-integration). Now we can move on to the last phase: similarity search based on the pill's description.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51579074-advanced-retrieval-augmented-generation-rag-for-pharmaceuticals-pill-search 224,Advanced Retrieval Augmented Generation (RAG) for,"# Advanced Retrieval Augmented Generation (RAG) for Pill Searching ## Advanced Retrieval Strategies A technique that has become increasingly popular is Retrieval-Augmented Generation (RAG), which enhances large language models (LLMs) by integrating external, authoritative knowledge sources beyond their initial training datasets for response generation. LLMs, trained on extensive data and utilizing billions of parameters, excel in tasks such as question answering, language translation, and sentence completion. RAG builds upon these strengths, tailoring LLMs to particular domains or aligning them with an organization's internal knowledge, without necessitating model retraining. This method provides a cost-efficient solution to refine LLM outputs, ensuring their continued relevance, precision, and utility across diverse applications. There are five key stages within RAG, which in turn will be a part of any larger application you build. These are: - **Loading:** This involves transferring data from its original location, such as text files, PDFs, websites, databases, or APIs, into your processing pipeline. LlamaHub offers a wide range of connectors for this purpose. - **Indexing:** This process entails developing a data structure that supports data querying. In the context of LLMs, it typically involves creating vector embeddings, which are numerical representations of your data's meaning, along with various metadata strategies to facilitate the accurate retrieval of contextually relevant data. - **Storing:** After indexing, it's common practice to store the index and other metadata. This step prevents the need for re-indexing in the future. - **Querying:** Depending on the chosen indexing method, there are multiple ways to deploy LLMs and LlamaIndex structures for querying, including options like sub-queries, multi-step queries, and hybrid approaches. - **Evaluation:** This crucial phase in the pipeline assesses its effectiveness against other methods or following modifications. Evaluation provides objective metrics of the accuracy, reliability, and speed of your system's responses to queries. A minimal end-to-end sketch of these stages follows below.
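As a rough, generic illustration (not the pill project's code), the sketch below strings these five stages together with the legacy `llama_index` API used elsewhere in this lesson; the `data/` folder, the persist directory, and the query string are placeholders.

```python
from llama_index import (
    SimpleDirectoryReader,
    VectorStoreIndex,
    StorageContext,
    load_index_from_storage,
)

# Loading: read raw files (e.g., pill descriptions) from a local folder.
documents = SimpleDirectoryReader('data').load_data()

# Indexing: chunk the documents into nodes and embed them.
index = VectorStoreIndex.from_documents(documents)

# Storing: persist the index so it does not have to be rebuilt on every run.
index.storage_context.persist(persist_dir='./storage')
index = load_index_from_storage(StorageContext.from_defaults(persist_dir='./storage'))

# Querying: retrieve the most relevant nodes and synthesize an answer.
query_engine = index.as_query_engine(similarity_top_k=3)
response = query_engine.query('white round pill with an engraved code')
print(response)

# Evaluation is typically done separately, e.g. with retrieval metrics such as
# hit rate and MRR (discussed at the end of this lesson).
```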
These stages are represented clearly by the following diagram from the LlamaIndex guide: ![Basic RAG pipeline](https://docs.llamaindex.ai/en/latest/_images/basic_rag.png) Since we have a description for each pill, we used these descriptions as if they were documents, so that once the description of the input pill is passed to the model we can obtain the most similar ones (and therefore also the least similar ones). After loading our data into Activeloop's Deep Lake, we can show how the data stored there is used to perform advanced retrieval.
```python
import os
from typing import Optional, Union

# Assuming the LlamaIndex Deep Lake integration; `embedding_function_text` is
# defined elsewhere in the project.
from llama_index.vector_stores import DeepLakeVectorStore

def create_upload_vectore_store(
    chunked_text: list,
    vector_store_path: Union[str, os.PathLike],
    filename: str,
    metadata: Optional[list[dict]] = None,
):
    vector_store = DeepLakeVectorStore(
        dataset_path=vector_store_path,
        runtime={""tensor_db"": True},
        overwrite=True,
        tensor_params=[
            {""name"": ""text"", ""htype"": ""text""},
            {""name"": ""embedding"", ""htype"": ""embedding""},
            {""name"": ""filename"", ""htype"": ""text""},
            {""name"": ""metadata"", ""htype"": ""json""},
        ],
    )
    # Work with the underlying Deep Lake vector store and add the chunked texts.
    vector_store = vector_store.vectorstore
    vector_store.add(
        text=chunked_text,
        embedding_function=embedding_function_text,
        filename=filename,
        embedding_data=chunked_text,
        rate_limiter={
            ""enabled"": True,
            ""bytes_per_minute"": 1500000,
            ""batch_byte_size"": 10000,
        },
        metadata=metadata if metadata else None,
    )
```
For all subsequent steps we used LlamaIndex, a data framework for LLM-based applications used to ingest, structure, and access private or domain-specific data. ### Indexing Phase Once we've ingested the data, LlamaIndex helps us index it into a structure that's easy to retrieve. This involves generating vector embeddings, which are stored in a specialized database called a vector store; in our case, we stored them in the Deep Lake Vector Store. Indexes can also store a variety of metadata about your data. Under the hood, Indexes store data in Node objects (which represent chunks of the original documents) and expose a Retriever interface that supports additional configuration and automation. This part is made up of two main blocks: - **Embedding**: we used OpenAI's `text-embedding-ada-002` as the embedding model - **Retriever**: we tried different approaches, which are described below ### Retriever Phase Retrievers are responsible for fetching the most relevant context given a",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51579074-advanced-retrieval-augmented-generation-rag-for-pharmaceuticals-pill-search 225,Advanced Retrieval Augmented Generation (RAG) for,"# Advanced Retrieval Augmented Generation (RAG) for Pill Searching ## Advanced Retrieval Strategies user query (or chat message). It can be built on top of indexes, but can also be defined independently. It is used as a key building block in query engines (and Chat Engines) for retrieving relevant context. **Vector Store Index** A VectorStoreIndex is by far the most frequent type of Index you'll encounter. The Vector Store Index takes your Documents and splits them up into Nodes.
It then creates vector embeddings of the text of every node, ready to be queried by an LLM. The vector store index stores each Node and a corresponding embedding in a Vector Store. ![How a Vector Store Index ingests Documents and Nodes](https://docs.llamaindex.ai/en/latest/_images/vector_store.png) *[Source Image](https://docs.llamaindex.ai/en/latest/_images/vector_store.png)* Querying a vector store index involves fetching the top-k most similar Nodes and passing them into our Response Synthesis module. ![How a Vector Store Index is queried](https://docs.llamaindex.ai/en/latest/_images/vector_store_query.png) *[Source Image](https://docs.llamaindex.ai/en/latest/_images/vector_store_query.png)* **BM25** BM25 is a popular ranking function used by search engines to estimate the relevance of documents to a given search query. It's based on probabilistic models and improves upon earlier models like TF-IDF (Term Frequency-Inverse Document Frequency). BM25 considers factors like term frequency and document length to provide a more nuanced approach to relevance scoring. It handles the issue of term saturation (where the importance of a term doesn't always increase linearly with frequency) and length normalization (adjusting scores based on document length to prevent bias toward longer documents). BM25's effectiveness in various search tasks has made it a standard in information retrieval. To use this retriever, we need to take the documents from Activeloop's Deep Lake and transform them into nodes; these nodes are then returned, ranked by relevance, once a question is asked. This process, as mentioned previously, exploits the similarity between the description of the pill (which is passed as the query) and the descriptions of the `n - 3` most similar pills. In the code below, we use the image filenames to build a query that returns only the rows for those images; by extracting their descriptions we create the nodes and the index.
```python
def get_index_and_nodes_after_visual_similarity(filenames: list):
    vector_store = load_vector_store(vector_store_path=VECTOR_STORE_PATH_DESCRIPTION)

    # Build a TQL query that keeps only the rows belonging to the given filenames.
    conditions = "" or "".join(f""filename == '{name}'"" for name in filenames)
    tql_query = f""select * where {conditions}""
    filtered_elements = vector_store.vectorstore.search(query=tql_query)

    chunks = []
    for el in filtered_elements[""text""]:
        chunks.append(el)

    # Turn the retrieved descriptions into LlamaIndex documents and nodes.
    string_iterable_reader = download_loader(""StringIterableReader"")
    loader = string_iterable_reader()
    documents = loader.load_data(texts=chunks)
    node_parser = SimpleNodeParser.from_defaults(separator=""\n"")
    nodes = node_parser.get_nodes_from_documents(documents)

    # To ensure the same ids per run, we manually set them.
    for idx, node in enumerate(nodes):
        node.id_ = f""node_{idx}""

    llm = OpenAI(model=""gpt-4"")
    service_context = ServiceContext.from_defaults(llm=llm)
    index = VectorStoreIndex(nodes=nodes)
    return index, nodes, service_context, filtered_elements
```
Since we have the `n` most similar images (obtained in the previous step through visual similarity), we can extract the descriptions of these `n` images and use them to generate the nodes. ### Why do we need to care about different retrieval methods and how are they different from each other? RAG (Retrieval-Augmented Generation) systems retrieve relevant information from a given knowledge base, which allows them to generate factual, contextually relevant, and domain-specific information. However, RAG faces many challenges when it comes to effectively retrieving relevant information and generating high-quality responses. Traditional search engines work by parsing documents into chunks and indexing these chunks. The algorithm then searches this index for relevant results based on a user's query. Retrieval-Augmented Generation is a new paradigm in machine learning that uses large language models (LLMs) to improve search and discovery. The LLMs, like GPT-4, generate relevant content based on context. The advanced technique utilized in this project is Hybrid Search. It is a technique that combines multiple search algorithms to improve the accuracy and relevance of search results. It combines the best",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51579074-advanced-retrieval-augmented-generation-rag-for-pharmaceuticals-pill-search 226,Advanced Retrieval Augmented Generation (RAG) for,"# Advanced Retrieval Augmented Generation (RAG) for Pill Searching ## Advanced Retrieval Strategies features of both keyword-based search algorithms and vector search techniques. By leveraging the strengths of different algorithms, it provides a more effective search experience for users. ### Hybrid Fusion Retriever In this advanced technique, we merge the vector-store-based retriever and the BM25-based retriever. This enables us to capture both semantic relations and keywords in our input queries. Since both of these retrievers calculate a score, we can use the reciprocal rerank algorithm to re-sort our nodes without using additional models or excessive computation. We can see the scheme in the image below, taken from the LlamaIndex guide: ![Hybrid fusion retrieval scheme](https://github.com/efenocchi/PillSearch-Activeloop/raw/main/hybrid_approach.webp) ### BM25 Retriever + Re-Ranking technique (classic approach with BM25) In this first case, we use a classic retriever based only on the **BM25 retriever**; the nodes returned for the query are then re-ranked and filtered. This allows us to keep the intermediate top-k values large and filter out unnecessary nodes. ```python _, nodes, service_context = get_index_and_nodes_from_activeloop(vector_store_path=VECTOR_STORE_PATH_BASELINE) ``` These nodes are then used by the BM25 retriever. Note that the index (discarded as `_` above) is not used here because BM25 manages this entire part internally. ```python bm25_retriever = BM25Retriever.from_defaults(nodes=nodes, similarity_top_k=10) ``` Now that we have the nodes, we can obtain the similarity by passing the description of the input image as the query.
```python nodes_bm25_response = bm25_retriever.retrieve(description) ``` `ClassicRetrieverBM25` is a class we used to manage the BM25-based retriever in a more organized way.
```python
class ClassicRetrieverBM25(BaseRetriever):
    def __init__(self, bm25_retriever):
        self.bm25_retriever = bm25_retriever
        super().__init__()

    def _retrieve(self, query, **kwargs):
        # Retrieve with BM25 and drop duplicate nodes.
        bm25_nodes = self.bm25_retriever.retrieve(query, **kwargs)
        all_nodes = []
        node_ids = set()
        for n in bm25_nodes:
            if n.node.node_id not in node_ids:
                all_nodes.append(n)
                node_ids.add(n.node.node_id)
        return all_nodes
```
The final part of this process is handled by the re-ranker, which orders the nodes according to their score.
```python
reranker = SentenceTransformerRerank(top_n=4, model=""BAAI/bge-reranker-base"")

# nodes retrieved by the bm25 retriever with the reranker
reranked_nodes_bm25 = reranker.postprocess_nodes(
    nodes_bm25_response,
    query_bundle=QueryBundle(QUERY),
)
```
### Advanced - Hybrid Retriever + Re-Ranking technique with BM25 and the vector retriever Here we extend the base retriever class and create a custom retriever that always uses both the **vector retriever** and the **BM25 retriever**. In this test, the previous approach is used, but the vector store is added as a retriever; it uses the `index` variable returned by the `get_index_and_nodes_after_visual_similarity` function to manage the nodes. At the beginning, we create the standard retrievers:
```python
# Excerpt from the custom retriever's setup (note the use of self).
index, nodes, _ = get_index_and_nodes_from_activeloop(
    vector_store_path=VECTOR_STORE_PATH_COMPLETE_SEQUENTIALLY
)
self.vector_retriever = index.as_retriever(similarity_top_k=2)
self.bm25_retriever = BM25Retriever.from_defaults(
    nodes=nodes, similarity_top_k=10
)
```
The nodes are then retrieved both via the vector store and via BM25; once they are all put together, they are re-ranked and filtered. This process lets us get the best of both models and keep the `k` best nodes.
```python
# Re-rank the BM25 results and the vector-store results separately,
# then merge them keeping only the best unique nodes.
reranked_nodes_bm25 = self.reranker.postprocess_nodes(
    self.nodes_bm25_response,
    query_bundle=QueryBundle(QUERY),
)
print(""Reranked Nodes BM25\n\n"")
for el in reranked_nodes_bm25:
    print(f""{el.score}\n"")

reranked_nodes_vector = self.reranker.postprocess_nodes(
    self.nodes_vector_response,
    query_bundle=QueryBundle(QUERY),
)
print(""Reranked Nodes Vector\n\n"")
for el in reranked_nodes_vector:
    print(f""{el.score}\n"")

unique_nodes = keep_best_k_unique_nodes(
    reranked_nodes_bm25, reranked_nodes_vector
)
print(""Unique Nodes\n\n"")
for el in unique_nodes:
    print(f""{el.id} : {el.score}\n"")
```
### Advanced - Hybrid Retriever + Re-Ranking technique with BM25 and the vector retriever and QueryFusionRetriever In the last case, we can see how, through the `QueryFusionRetriever` object, the entire process described above can be expressed with a single retriever. We fuse our index with a BM25-based retriever; this enables us to capture both semantic relations and keywords in our input queries.
Since both of these retrievers calculate a score, we can use the reciprocal rerank algorithm to re-sort our nodes without using additional models or excessive computation.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51579074-advanced-retrieval-augmented-generation-rag-for-pharmaceuticals-pill-search 227,Advanced Retrieval Augmented Generation (RAG) for,"# Advanced Retrieval Augmented Generation (RAG) for Pill Searching ## Advanced Retrieval Strategies
```python
from llama_index.retrievers import BM25Retriever

vector_retriever = index.as_retriever(similarity_top_k=2)
bm25_retriever = BM25Retriever.from_defaults(
    docstore=index.docstore, similarity_top_k=2
)
```
Here we can create our fusion retriever, which returns the top-2 most similar nodes out of the 4 nodes returned by the two retrievers:
```python
from llama_index.retrievers import QueryFusionRetriever

retriever = QueryFusionRetriever(
    [vector_retriever, bm25_retriever],
    similarity_top_k=2,
    num_queries=4,  # set this to 1 to disable query generation
    mode=""reciprocal_rerank"",
    use_async=True,
    verbose=True,
)
```
Finally, we can perform the query search: ```python retriever.retrieve(description) ```",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51579074-advanced-retrieval-augmented-generation-rag-for-pharmaceuticals-pill-search 228,Advanced Retrieval Augmented Generation (RAG) for,"# Advanced Retrieval Augmented Generation (RAG) for Pill Searching ## Metrics and Conclusions Advanced retrievers, like the ones we've been discussing, represent a significant leap in our ability to process and analyze vast amounts of data. These systems, armed with sophisticated algorithms like BM25, vector-store search, and the latest OpenAI embedding models, are not just tools; they are gateways to a new era of information accessibility and knowledge discovery. The power of these retrievers lies in their understanding of context, their ability to sift through massive data sets to find relevant information, and their efficiency in providing accurate results. They have become indispensable in sectors where precision and speed are essential. In the healthcare sector, for example, their application to identify and cross-reference medical information can represent a game changer, improving both the quality of care and patient safety. All the tested approaches work very well for our use case, as the following metrics show: **BM25 Retriever + Re-Ranking technique (classic approach with BM25)**
```
retrievers   hit_rate   mrr
top-2 eval   0.964643   0.944501
```
**Advanced - Hybrid Retriever + Re-Ranking technique with BM25 and the vector retriever**
```
retrievers   hit_rate   mrr
top-2 eval   0.975101   0.954078
```
**Advanced - Hybrid Retriever + Re-Ranking technique with BM25 and the vector retriever and QueryFusionRetriever**
```
retrievers   hit_rate   mrr
top-2 eval   0.977138   0.954235
```
Here, **hit rate** and **MRR (Mean Reciprocal Rank)** are two metrics commonly used to evaluate the performance of information retrieval systems, search algorithms, and recommendation systems. **Hit Rate:** The hit rate is a measure of accuracy, specifically the proportion of times a system successfully retrieves relevant documents or items. It's often used in the context of recommendation systems to evaluate whether any of the recommended items are relevant. A hit is usually defined by whether the relevant item appears in the top-N recommendations or retrieval results.
For instance, if the system provides 10 recommendations and at least one of them is relevant, it's considered a hit. The hit rate is the number of hits divided by the total number of queries made. **Mean Reciprocal Rank (MRR):** MRR is a statistic that measures the average of the reciprocal ranks of results for a sample of queries. The reciprocal rank of a query response is the multiplicative inverse of the rank of the first correct answer. For example, if the first relevant document for a query is located at the third position in a list of ranked items, the reciprocal rank is 1/3. MRR is calculated by taking the average of the reciprocal ranks across all queries. It gives a higher score to systems where the relevant item appears earlier in the recommendation or search results list, therefore taking the ranking of results into account, unlike the hit rate, which is binary. Both metrics are critical for assessing how effectively a system presents relevant information to users, with the hit rate focusing on the presence of a relevant result and MRR on the rank of the first relevant result.",advanced_rag_course,https://learn.activeloop.ai/courses/take/rag/multimedia/51579074-advanced-retrieval-augmented-generation-rag-for-pharmaceuticals-pill-search
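To make these two metrics concrete, here is a small, self-contained sketch (not the project's evaluation code; the ranked results and relevant ids are made up) showing how hit rate and MRR can be computed from ranked retrieval results:

```python
from typing import List

def hit_rate(ranked_results: List[List[str]], relevant: List[str], k: int = 2) -> float:
    # A query counts as a hit if its relevant item appears in the top-k results.
    hits = sum(1 for results, rel in zip(ranked_results, relevant) if rel in results[:k])
    return hits / len(relevant)

def mean_reciprocal_rank(ranked_results: List[List[str]], relevant: List[str]) -> float:
    # Reciprocal rank is 1 / position of the first relevant item (0 if absent).
    total = 0.0
    for results, rel in zip(ranked_results, relevant):
        if rel in results:
            total += 1.0 / (results.index(rel) + 1)
    return total / len(relevant)

# Made-up ranked results for three queries and the single relevant pill id per query.
ranked = [['pill_7', 'pill_2'], ['pill_4', 'pill_9'], ['pill_1', 'pill_5']]
relevant = ['pill_2', 'pill_9', 'pill_8']

print(hit_rate(ranked, relevant, k=2))          # 2 hits out of 3 queries -> 0.666...
print(mean_reciprocal_rank(ranked, relevant))   # (1/2 + 1/2 + 0) / 3 = 0.333...
```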