{
  "cells": [
    {
      "cell_type": "markdown",
      "source": [
        "# Build Smart Document Understanding Agents with TensorLake and OpenAI Agent SDK\n",
        "*Author: [Antaripa Saha](https://x.com/doesdatmaksense)*"
      ],
      "metadata": {
        "id": "I30xTVNUN-Cj"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "In this example, you will learn how to build smart agents that understand documents using TensorLake and the OpenAI Agent SDK. To learn more about agentic applications, [check out the Tensorlake docs](https://docs.tensorlake.ai/use-cases/agents-and-rag-workflows/agents-understand-docs)."
      ],
      "metadata": {
        "id": "yrUAyOb0MCLs"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Step 0: Prerequisites\n",
        "\n",
        "1. Install the [Tensorlake SDK](https://pypi.org/project/tensorlake/)\n",
        "2. Import necessary packages\n",
        "3. Set your [Tensorlake API Key](https://docs.tensorlake.ai/platform/authentication)\n",
        "\n",
        "**Note:** Learn more with the [Tensorlake docs](https://docs.tensorlake.ai/)."
      ],
      "metadata": {
        "id": "UNl3bxKn-jTf"
      }
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "R9anspFSCHJc"
      },
      "outputs": [],
      "source": [
        "!pip install tensorlake openai-agents"
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "from tensorlake.documentai import DocumentAI\n",
        "from tensorlake.documentai.models import (\n",
        "    ParsingOptions,\n",
        "    StructuredExtractionOptions,\n",
        "    EnrichmentOptions,\n",
        "    ParseStatus,\n",
        "    ChunkingStrategy,\n",
        "    TableOutputMode,\n",
        "    TableParsingFormat,\n",
        "    PartitionStrategy\n",
        ")\n",
        "\n",
        "# openai agent sdk\n",
        "from agents import Agent, Runner\n",
        "\n",
        "from pydantic import BaseModel, Field\n",
        "from typing import List, Optional\n",
        "from enum import Enum\n",
        "\n",
        "import time\n",
        "import json"
      ],
      "metadata": {
        "id": "ohxd50T3BR1w"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "qbPh7a-aC7Fw"
      },
      "outputs": [],
      "source": [
        "%env TENSORLAKE_API_KEY=YOUR_TENSORLAKE_API_KEY\n",
        "%env OPENAI_API_KEY=YOUR_OPENAI_API_KEY"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Step 1: Specify Structured Data Extraction\n",
        "\n",
        "Create a simple Pydantic model to specify the structured data you want extracted from the document."
      ],
      "metadata": {
        "id": "FqVKAM-0-4zZ"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "class ResearchPaperSchema(BaseModel):\n",
        "    \"\"\"Schema capturing the most critical information from a research paper\"\"\"\n",
        "\n",
        "    title: str = Field(description=\"Title of the research paper\")\n",
        "    authors: List[str] = Field(description=\"List of author names\")\n",
        "    abstract: str = Field(description=\"Abstract of the paper\")\n",
        "\n",
        "    research_problem: str = Field(description=\"What problem does this paper solve?\")\n",
        "    main_approach: str = Field(description=\"What is the main approach or method used?\")\n",
        "    key_contributions: List[str] = Field(description=\"What are the 3-5 most important contributions?\")\n",
        "\n",
        "    methodology_summary: str = Field(description=\"Brief summary of the research methodology\")\n",
        "    datasets_used: Optional[List[str]] = Field(description=\"Datasets mentioned in the paper\", default=None)\n",
        "    evaluation_metrics: Optional[List[str]] = Field(description=\"How do they measure success?\", default=None)\n",
        "\n",
        "    related_work_summary: Optional[str] = Field(description=\"Brief summary of how this relates to existing work\", default=None)\n",
        "    limitations: Optional[List[str]] = Field(description=\"What limitations do the authors acknowledge?\", default=None)"
      ],
      "metadata": {
        "id": "q5-LouGfCL95"
      },
      "execution_count": null,
      "outputs": []
    },
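    {
      "cell_type": "markdown",
      "source": [
        "Before sending the schema to Tensorlake, you can sanity-check the JSON Schema that Pydantic derives from the model. The sketch below uses the standard Pydantic v2 `model_json_schema()` method; the field descriptions you wrote above become part of this schema."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Inspect the JSON Schema generated from the Pydantic model above;\n",
        "# this is the shape of the schema that guides structured extraction.\n",
        "schema = ResearchPaperSchema.model_json_schema()\n",
        "print(list(schema['properties'].keys()))"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },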
    {
      "cell_type": "markdown",
      "source": [
        "## Step 2: Parse the Document\n",
        "To use the Tensorlake Python SDK, you need to:\n",
        "\n",
        "1. Create a Tensorlake client\n",
        "2. Specify the file path (or URL) of the document you want to parse\n",
        "3. Upload the document to Tensorlake Cloud\n",
        "4. Specify parsing options (if none are specified, the defaults are used)\n",
        "5. Initiate the parsing job and wait until it completes successfully"
      ],
      "metadata": {
        "id": "NvSZdD_MMvol"
      }
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "As25OAO3ESre",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "599b7e05-d310-449e-9d04-3449d96296d1"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Parse job submitted with ID: parse_mhwzN6NGcjbMhDDLw8wfG\n",
            "waiting 5 s…\n",
            "parse status: processing\n",
            "waiting 5 s…\n",
            "parse status: processing\n",
            "waiting 5 s…\n",
            "parse status: processing\n",
            "waiting 5 s…\n",
            "parse status: processing\n",
            "waiting 5 s…\n",
            "parse status: successful\n"
          ]
        }
      ],
      "source": [
        "# Create a Tensorlake client; it reads the `TENSORLAKE_API_KEY` environment variable you set above\n",
        "doc_ai = DocumentAI()\n",
        "\n",
        "file_path = \"https://pub-226479de18b2493f96b64c6674705dd8.r2.dev/Jasper%20and%20Stells-%20distillation%20of%20SOTA%20embedding%20models.pdf\"\n",
        "\n",
        "# Configure parsing options for academic papers\n",
        "parsing_options = ParsingOptions(\n",
        "    chunking_strategy=ChunkingStrategy.PAGE\n",
        ")\n",
        "\n",
        "# Configure structured extraction\n",
        "structured_extraction_options = StructuredExtractionOptions(\n",
        "    schema_name=\"Research Paper Analysis\",\n",
        "    json_schema=ResearchPaperSchema\n",
        ")\n",
        "\n",
        "# Parse the document with the specified extraction options\n",
        "parse_id = doc_ai.parse(file_path, parsing_options=parsing_options, structured_extraction_options=[structured_extraction_options])\n",
        "\n",
        "print(f\"Parse job submitted with ID: {parse_id}\")\n",
        "\n",
        "# Wait for completion\n",
        "result = doc_ai.wait_for_completion(parse_id)"
      ]
    },
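    {
      "cell_type": "markdown",
      "source": [
        "`wait_for_completion` blocks until the job finishes. If you prefer explicit control (for example, to log progress or enforce your own timeout), you can poll the job yourself using the `ParseStatus` enum imported earlier. This is a sketch: the `get_parsed_result` method name and the exact enum members are assumptions here, so check the Tensorlake SDK reference before relying on them."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Sketch of manual polling (assumed API; verify the method and enum\n",
        "# member names against the Tensorlake SDK reference).\n",
        "while True:\n",
        "    result = doc_ai.get_parsed_result(parse_id)\n",
        "    print(f'parse status: {result.status}')\n",
        "    if result.status == ParseStatus.SUCCESSFUL:\n",
        "        break\n",
        "    time.sleep(5)"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },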
    {
      "cell_type": "markdown",
      "source": [
        "## Understanding Tensorlake Parsing Output\n",
        "\n",
        "In a single DocumentAI API call, Tensorlake returns both the full markdown content of the document and the structured data as JSON."
      ],
      "metadata": {
        "id": "qPIZWDLz_IDL"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Review the Structured Data"
      ],
      "metadata": {
        "id": "6qb2cfQPxavw"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "print(json.dumps(result.structured_data[0].data, indent=2))"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "mxXB6FGyucP2",
        "outputId": "de97f24c-84ee-45d6-a37d-ece51a042488"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "{\n",
            "  \"abstract\": \"A crucial component in many deep learning applications, such as Frequently Asked Questions (FAQ) and Retrieval-Augmented Generation (RAG), is dense retrieval. In this process, embedding models transform raw text into numerical vectors. However, the embedding models that currently excel on text embedding benchmarks, like the Massive Text Embedding Benchmark (MTEB), often have numerous parameters and high vector dimensionality. This poses challenges for their application in real-world scenarios. To address this issue, we propose a novel multi-stage distillation framework that enables a smaller student embedding model to distill multiple larger teacher embedding models through three carefully designed losses. Meanwhile, we utilize Matryoshka Representation Learning (MRL) to reduce the vector dimensionality of the student embedding model effectively. Our student model named Jasper with 2 billion parameters, built upon the Stella embedding model, obtained the No.3 position on the MTEB leaderboard (as of December 24, 2024), achieving an average 71.54 score across 56 datasets. We have released the model and data on the Hugging Face Hub, and the training codes are available in this project repository.\",\n",
            "  \"authors\": [\n",
            "    \"Dun Zhang\",\n",
            "    \"Jiacheng Li\",\n",
            "    \"Ziyang Zeng\",\n",
            "    \"Fulong Wang\"\n",
            "  ],\n",
            "  \"datasets_used\": [\n",
            "    \"sentence-transformers/embedding-training-data\",\n",
            "    \"BAAI/Infinity-MM\"\n",
            "  ],\n",
            "  \"evaluation_metrics\": [\n",
            "    \"average score on MTEB leaderboard across 56 datasets\"\n",
            "  ],\n",
            "  \"key_contributions\": [\n",
            "    \"Propose a novel multi-stage distillation framework for reducing model size without significantly losing performance.\",\n",
            "    \"Develop the Jasper model with 2 billion parameters that perform comparably to models with 7 billion parameters.\",\n",
            "    \"Use Matryoshka Representation Learning (MRL) to reduce vector dimensionality efficiently.\",\n",
            "    \"Publication of three tailored loss functions to enhance distillation learning.\",\n",
            "    \"Release of model and data on Hugging Face Hub.\"\n",
            "  ],\n",
            "  \"limitations\": [\n",
            "    \"The paper does not conduct experiments to evaluate the proposed approach for self-distillation in detail.\",\n",
            "    \"Stage 4 only achieves preliminary alignment between text and image modalities, indicating room for improvement.\"\n",
            "  ],\n",
            "  \"main_approach\": \"The main approach is a multi-stage distillation framework that involves distilling information from larger teacher models to a smaller student model using three specific loss functions, combined with Matryoshka Representation Learning (MRL) for dimensionality reduction.\",\n",
            "  \"methodology_summary\": \"The methodology involves a four-stage distillation process where a smaller student model distills information from larger teacher models using specifically designed loss functions to learn effective text representations while employing MRL for dimensionality reduction. Subsequent stages focus on enhanced dimension reduction and unlocking multimodal potential through incorporating vision encodings.\",\n",
            "  \"related_work_summary\": \"The paper builds on existing dense retrieval and knowledge distillation methodologies, emphasizing enhanced retrieval training efficiency and effectiveness, with references to prior works on knowledge distillation and representation learning.\",\n",
            "  \"research_problem\": \"The paper addresses the challenge of deploying high-performing dense retrieval models with large parameters and vector dimensions in practical applications by proposing a distillation framework to reduce model size while maintaining performance.\",\n",
            "  \"title\": \"Jasper and Stella: distillation of SOTA embedding models\"\n",
            "}\n"
          ]
        }
      ]
    },
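    {
      "cell_type": "markdown",
      "source": [
        "Because extraction was guided by `ResearchPaperSchema`, you can usually validate the returned dictionary back into the Pydantic model for typed attribute access. This sketch uses the standard Pydantic v2 `model_validate` method and assumes the extraction returned every required field:"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Round-trip the extracted dict into the Pydantic model for typed access.\n",
        "paper = ResearchPaperSchema.model_validate(result.structured_data[0].data)\n",
        "print(paper.title)\n",
        "print(len(paper.authors), 'authors')"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },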
    {
      "cell_type": "markdown",
      "source": [
        "## Review the Markdown Chunks"
      ],
      "metadata": {
        "id": "-jA57HaIAonc"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Print the markdown content of each chunk extracted from the document\n",
        "for index, chunk in enumerate(result.chunks):\n",
        "    print(f\"Chunk {index}:\")\n",
        "    print(chunk.content)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "OwYUCCZLArHs",
        "outputId": "f1ea33bd-09f6-4d75-bb0b-d273ab87dbfd"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Chunk 0:\n",
            "\n",
            "arXiv:2412.19048v2 [cs.IR] 23 Jan 2025\n",
            "\n",
            "## Jasper and Stella: distillation of SOTA embedding models\n",
            "\n",
            "Dun Zhang1, Jiacheng Li1; Ziyang Zeng1,2, Fulong Wang1 1 NovaSearch Team\n",
            "2Beijing University of Posts and Telecommunications infgrad@163.com jcli.nlp@gmail.com ziyang1060@bupt.edu.cn wangfl1989@163.com\n",
            "\n",
            "## Abstract\n",
            "\n",
            "A crucial component in many deep learning applications, such as Frequently Asked Ques- tions (FAQ) and Retrieval-Augmented Gener- ation (RAG), is dense retrieval. In this pro- cess, embedding models transform raw text into numerical vectors. However, the embed- ding models that currently excel on text embed- ding benchmarks, like the Massive Text Embed- ding Benchmark (MTEB), often have numer- ous parameters and high vector dimensionality. This poses challenges for their application in real-world scenarios. To address this issue, we propose a novel multi-stage distillation frame- work that enables a smaller student embedding model to distill multiple larger teacher embed- ding models through three carefully designed losses. Meanwhile, we utilize Matryoshka Rep- resentation Learning (MRL) to reduce the vec- tor dimensionality of the student embedding model effectively. Our student model named Jasper with 2 billion parameters, built upon the Stella embedding model, obtained the No.3 po- sition on the MTEB leaderboard (as of Decem- ber 24, 2024), achieving average 71.54 score across 56 datasets. We have released the model and data on the Hugging Face Hub 1 2, and the training codes are available in this project repository 3.\n",
            "\n",
            "## 1 Introduction\n",
            "\n",
            "With the rapid development of natural language pro- cessing technologies, text embedding models play a crucial role in text representation (Kashyap et al., 2024), information retrieval (Zhao et al., 2024a), and text generation tasks (Gao et al., 2023). By mapping words, sentences, or documents into a\n",
            "high-dimensional continuous space, these models bring similar texts closer together in their vector representations, thereby not only enhancing the manipulability of textual data but also significantly improving the performance of various downstream tasks (Agarwal et al., 2024; Wang et al., 2024; Zhou et al., 2024).\n",
            "However, embedding models that demonstrate excellent performance on the METB leaderboard4 (Muennighoff et al., 2023) usually contain a large number of parameters and high vector dimensions. For instance, both NV-Embed-v2 (Lee et al., 2024; Moreira et al., 2024) and bge-en-icl (Xiao et al., 2023; Li et al., 2024) have 7 billion parameters and 4096-dimensional vector representations. These characteristics lead to slow inference speeds and high storage costs, posing a significant challenge to their direct practical application.\n",
            "To address the aforementioned challenges, we propose a novel multi-stage knowledge distillation framework for embedding models. Knowledge dis- tillation is widely recognized for enhancing the effectiveness of dense retrieval training (Hofstätter et al., 2021; Lin et al., 2021). In our framework, we introduce three carefully designed loss func- tions to distill knowledge from the teacher model to the student model. These loss functions shift from a specific constraint to a broader constraint. The first, cosine loss, calculates the absolute dif- ference in text representations between the student and teacher models. The pointwise signal derived from a single text is straightforward, yet its lim- ited optimization direction tends to readily lead to overfitting on the training data. Thus, we introduce the similarity loss, which measures the semantic discrepancies between the student and teacher mod- els from a text-pair perspective. Additionally, we design the relative similarity distillation loss to fur- ther leverage relative ranking information. This\n",
            "\n",
            "*Dun Zhang and Jiacheng Li make equal contributions to this work.\n",
            "\n",
            "1https://huggingface.co/infgrad/jaspe r_en_vision_language_v1\n",
            "\n",
            "2https://huggingface.co/datasets/infg rad/ jasper_text_distill_dataset\n",
            "\n",
            "3https : //github. com/NLPJCL/RAG-Retriev al\n",
            "\n",
            "4https://huggingface.co/spaces/mteb/l eaderboard\n",
            "\n",
            "Chunk 1:\n",
            "ensures that the student model learns the teacher's ranking preferences across all potential positive and negative text pairs within the batch, thereby improving the robustness of embedding learning.\n",
            "To further improve the performance of the stu- dent model, we utilize multiple powerful large em- bedding models as teachers. Specifically, we con- catenate the vectors produced by all teacher models to create the final ground truth, which inevitably leads to an increase in the student model's vector dimension. To address this issue, we adopt a Ma- tryoshka Representation Learning (MRL)-based training method (Kusupati et al., 2024) to effec- tively compress the student model's vector rep- resentation. Additionally, to develop the multi- modal retrieval capability of our student model, we integrate a vision encoder and introduce a self- distillation mechanism to align the visual embed- dings with the textual embeddings. In terms of the overall training process, we employ a 4-stage dis- tillation approach to progressively transfer knowl- edge from the teacher models to the student model. Each stage focuses on specific aspects, combining three loss functions and fine-tuning different pa- rameters of the student model to ensure a smooth and effective distillation process.\n",
            "Experimental results on the MTEB leaderboard demonstrate that our student model named Jasper with 2 billion (2B) parameters, primarily built upon the foundation of the Stella embedding model, de- livers excellent performance (average 71.54 score across 56 datasets) comparable to other embedding models with 7 billion (7B) parameters, and sig- nificantly outperforms models with fewer than 2B parameters.\n",
            "The main contributions of this paper can be sum- marized as follows:\n",
            "(1) We propose a novel multi-stage distillation framework, which enables a smaller student embedding model to effectively distill knowl- edge from multiple larger teacher embedding models through three carefully designed loss functions.\n",
            "(2) Our 2B Jasper model obtained the No.3 posi- tion on the MTEB leaderboard (as of Decem- ber 24, 2024), producing results comparable to other top-ranked 7B embedding models and significantly outperforming other models with less than 2B parameters.\n",
            "\n",
            "## 2 Methods\n",
            "\n",
            "\n",
            "## 2.1 Definitions\n",
            "\n",
            "For a more comprehensive introduction of our model and distillation framework, we make the following definitions:\n",
            "· Student Model: The text embedding model that is the subject of training, tasked with learning to produce effective vector represen- tations.\n",
            "· Teacher Model: The state-of-the-art (SOTA) embedding model serving as a teacher, guid- ing the student model in generating effective vectors. Notably, the teacher model will not be trained.\n",
            "· Sx: The normalized vector representation of a text x produced by the student model.\n",
            "· tx: The vector representation of the same text x, first normalized, then concatenated, and normalized again, produced by multiple teacher models.\n",
            "· Sx: A matrix of normalized vector represen- tations for a batch of text X produced by the student model.\n",
            "· Tx: A corresponding matrix of vector rep- resentations for the same batch of text X, first normalized, then concatenated, and subse- quently normalized again, generated by multi- ple teacher models.\n",
            "\n",
            "## 2.2 Model Architecture\n",
            "\n",
            "Our student model architecture follows the sim- ple and standard design of combining a language model with a vision encoder. As shown in Figure 1, it consists of four components:\n",
            "1. A encoder-based language model that gener- ates text embeddings through mean pooling.\n",
            "2. A vision encoder that independently maps im- ages into vision token embeddings.\n",
            "3. A pooler that maps vision token embed- dings to the same dimension as the language model's input textual embeddings, while re- ducing the length of visual token sequences.\n",
            "4. Several fully connected (FC) layers that project the embeddings to a specific dimen- sion for the final output.\n",
            "\n",
            "Chunk 2:\n",
            "Figure 1: The model architecture of Jasper model.\n",
            "\n",
            "### Figure \n",
            "12288 dim vector\n",
            "1024 dim vector\n",
            "512 dim vector\n",
            "256 dim vector\n",
            "FC1\n",
            "FC2\n",
            "FC3\n",
            "FC4\n",
            "Mean Polling\n",
            "...\n",
            "Stella Encoder\n",
            "AvgPool2d\n",
            "Siglip Vision Encoder\n",
            "Stella Input Embedding\n",
            "Image\n",
            "Text\n",
            "\n",
            "\n",
            "## 2.3 Stage 1&2: Distillation from Multiple Teachers\n",
            "\n",
            "In the first two stages of distillation, we use a fully connected layer to map the vectors of the student model onto the dimensions of the teacher mod- els. Specifically, we employ NV-Embed-v25 and stella_en_1.5B_v56 as teacher models, which have vector dimensions of 4096 and 8192, respectively. After the mapping process, the student model's vector dimension is adjusted to 12288, equal to the combined vector dimensions of two teacher models (4096 + 8192).\n",
            "The objective of the first two stages is to enable the student model to effectively learn text represen- tations from multiple teacher models by aligning its output vectors with the corresponding teacher vectors. To achieve this goal, we carefully design three loss functions that progress from a specific to a broader perspective. The first loss function is cosine loss, which is formulated as follows:\n",
            "Lcosine = 2 x\n",
            "1 - Sx . tr. (1)\n",
            "The Lcosine is designed to minimize the angular difference between student and teacher vectors in the high-dimensional space, with the aim of align- ing their absolute text representations. However, the Lcosine value generally does not converge to zero, suggesting a persistent angular discrepancy between the student and the teachers. Meanwhile, the pointwise signal derived from a single text has a limited optimization direction, which can easily lead to overfitting on the training data.\n",
            "Lsim = MSE(SxSk,TxTÆ)) (2)\n",
            "To complement the limitations of Lcosine, we in- troduce the second loss function, similarity loss, as defined in (2), which models the semantic matching differences between the student and teacher mod- els from a text-pair perspective. This loss function ensures a relatively consistent judgment of simi- larity between the student model and the teacher models, without enforcing an absolute fit between the student model and the teacher model.\n",
            "N 1\n",
            "Cresim = > MAX(0, ti-tj>tm.tn\n",
            "Sm . Sn - Si Sj + margin) (3)\n",
            "To further leverage relative comparison signals, inspired by CoSENT loss7, we propose the third loss function, relative similarity distillation loss, as defined in (3). For each batch of text data, we em- ploy teacher models to automatically generate soft labels for all text pairs, thereby identifying poten- tial positive and negative samples. Subsequently, the student model is trained to ensure that the simi- larity between positive pairs exceeds that between negative pairs, with the margin hyperparameter controlling the degree of this difference. If the batch size is m, the total number of text pairs (i.e., N) is given by C22 .\n",
            "C = \\1Lcosine + 12Lsim + AzLresim (4)\n",
            "The final loss £ is a weighted sum of the afore- mentioned three loss functions. where X1,12, and 13 are hyperparameters. The biggest advantage of distillation vectors is that we do not need any supervised data. Without considering resource con- straints, we can use trillions of unsupervised texts\n",
            "\n",
            "5https: //huggingface. co/nvidia/NV-Emb ed-v2\n",
            "\n",
            "'https://huggingface.co/dunzhang/stel la_en_1.5B_v5\n",
            "\n",
            "\"https://spaces.ac.cn/archives/8847\n",
            "\n",
            "Chunk 3:\n",
            "for distillation training to achieve extreme perfor- mance for a given model size.\n",
            "Notably, the main difference between stage 1 and stage 2 lies in the trained parameters. In stage 1, only the fully connected layer (FC1) is trained, whereas in stage 2, both the fully connected layer (FC1) and the last three encoder layers of the stu- dent model are trained.\n",
            "\n",
            "## 2.4 Stage 3: Dimension Reduction\n",
            "\n",
            "In the first two stages, the student model is trained by learning from the teacher models. Specifically, we concatenate the vectors produced by the two teacher models, resulting in a student model vector with a dimensionality of 12,288 (4,096 + 8,192), which is impractically large. Inspired by MRL (Kusupati et al., 2024), we introduce three addi- tional, independent fully connected layers (FC2, FC3, and FC4) to generate low-dimensionality vec- tors, each achieving a different level of dimension reduction. For instance, by incorporating the fully connected layer FC3 with a shape of (15368, 512), we obtain a more manageable 512-dimensional vec- tor space.\n",
            "For the three FC layers, since the dimensions of the reduced vectors do not align with those of the concatenated teacher vector, the Lcosine is omitted and only the Lsim and Cresim are utilized. To en- sure the accuracy of the vectors generated from the FC1 layer (i.e., the 12288-dimensional vec- tors), they continue to be trained using all three loss functions. During this stage, all parameters of the student model are trained.\n",
            "In addition to the previously mentioned dimen- sion reduction method, we present a potentially promising approach to self-distillation, where the aligned vectors from an earlier stage of the student model's training serve as teacher vectors. Specifi- cally, we propose to utilize the 12288-dimensional vectors output from the FC1 layer to serve as teach- ers for the shorter vectors generated by the other three FC layers. This approach provides a unique advantage by enabling the reduction of the dimen- sionality of any embedding model, utilizing only unsupervised data and the model itself. Given that this paper primarily focuses on introducing the training methods of the Stella and Jasper mod- els, we did not conduct experiments to evaluate the specific merits of this proposed approach.\n",
            "\n",
            "## 2.5 Stage 4: Unlock Multimodal Potential\n",
            "\n",
            "In stage 4, we leverage image-caption pairs as the training dataset, focusing exclusively on training the visual encoder while keeping the other compo- nents frozen. The training process is based on self- distillation, where the caption's vector representa- tion serves as the teacher vector, and the image's vector representation acts as the student vector. All fully connected layers introduced in previous stages are employed to generate multiple pairs of student and teacher vectors. For each pair, we calculate three losses, which are then averaged to obtain the final loss.\n",
            "It is important to note that this stage achieves only a preliminary alignment between the text and image modalities, leaving significant room for im- provement. In future work, we aim to further ex- plore and refine the modality alignment process.\n",
            "\n",
            "## 3 Experiments\n",
            "\n",
            "\n",
            "## 3.1 Implementation details\n",
            "\n",
            "Our model named Jasper is initialized from stella_en_1.5B_v5 and google/siglip-so400m- patch14-384 (Zhai et al., 2023; Alabdulmohsin et al., 2024). stella_en_1.5B_v5 and NV-Embed-v2 are our teacher models. The total number of parameters in our Jasper model is 1.9B (stella 1.5B parameters and siglip 400M parameters). For hyperparameters, we set X1 = 10, 12 = 200, 13 = 20, margin = 0.015.\n",
            "In all four stages, the model is trained using 8 x RTX A6000 GPUs, with a maximum input length of 512 tokens, mixed precision training (BF16), DeepSpeed ZERO-stage-2, and the AdamW opti- mizer. During stage 1 (distillation training), the batch size is set to 128, the learning rate is 1e-4 per step, and the model checkpoint at step 4000 is selected as the final model. In the case of stage 2 (also distillation training), the batch size remains 128, the learning rate drops to 8e-5 per step, and the final model is the checkpoint at step 7000. For stage 3 (dimension reduction training), the batch size is again 128, the learning rate is adjusted to 7e- 5 per step, and the checkpoint at step 2200 serves as the final model. Lastly, in stage 4 (multimodal training), the batch size is reduced to 90, the learn- ing rate returns to 1e-4 per step, and the final model is chosen from the checkpoint at step 3500.\n",
            "\n",
            "8This refers to the dimensionality of the encoder layer's hidden state.\n",
            "\n",
            "Chunk 4:\n",
            "Table 1: MTEB Results as of December 24, 2024. We use the original model names on the leaderboard for clarity.\n",
            "\n",
            "\n",
            "<table>\n",
            "<tr>\n",
            "<th>Model</th>\n",
            "<th>Model Size</th>\n",
            "<th>Average(56 datasets)</th>\n",
            "<th>Classification</th>\n",
            "<th>Clustering</th>\n",
            "<th>PairClassification</th>\n",
            "<th>Reranking</th>\n",
            "<th>Retrieval</th>\n",
            "<th>STS</th>\n",
            "<th>Summarization</th>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>NV-Embed-v2</td>\n",
            "<td>7851M</td>\n",
            "<td>72.31</td>\n",
            "<td>90.37</td>\n",
            "<td>58.46</td>\n",
            "<td>88.67</td>\n",
            "<td>60.65</td>\n",
            "<td>62.65</td>\n",
            "<td>84.31</td>\n",
            "<td>30.7</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>bge-en-icl</td>\n",
            "<td>7111M</td>\n",
            "<td>71.67</td>\n",
            "<td>88.95</td>\n",
            "<td>57.89</td>\n",
            "<td>88.14</td>\n",
            "<td>59.86</td>\n",
            "<td>62.16</td>\n",
            "<td>84.24</td>\n",
            "<td>30.77</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Stella_en_1.5B_v5</td>\n",
            "<td>1543M</td>\n",
            "<td>71.19</td>\n",
            "<td>87.63</td>\n",
            "<td>57.69</td>\n",
            "<td>88.07</td>\n",
            "<td>61.21</td>\n",
            "<td>61.01</td>\n",
            "<td>84.51</td>\n",
            "<td>31.49</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>SFR-Embedding-2_R</td>\n",
            "<td>7111M</td>\n",
            "<td>70.31</td>\n",
            "<td>89.05</td>\n",
            "<td>56.17</td>\n",
            "<td>88.07</td>\n",
            "<td>60.14</td>\n",
            "<td>60.18</td>\n",
            "<td>81.26</td>\n",
            "<td>30.71</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>gte-Qwen2-1.5B-instruct</td>\n",
            "<td>1776M</td>\n",
            "<td>67.16</td>\n",
            "<td>82.47</td>\n",
            "<td>48.75</td>\n",
            "<td>87.51</td>\n",
            "<td>59.98</td>\n",
            "<td>58.29</td>\n",
            "<td>82.73</td>\n",
            "<td>31.17</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>voyage-lite-02-instruct</td>\n",
            "<td>1220M</td>\n",
            "<td>67.13</td>\n",
            "<td>79.25</td>\n",
            "<td>52.42</td>\n",
            "<td>86.87</td>\n",
            "<td>58.24</td>\n",
            "<td>56.60</td>\n",
            "<td>85.79</td>\n",
            "<td>31.01</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Jasper (our model)</td>\n",
            "<td>1543M+400M</td>\n",
            "<td>71.54</td>\n",
            "<td>88.49</td>\n",
            "<td>58.04</td>\n",
            "<td>88.07</td>\n",
            "<td>60.91</td>\n",
            "<td>61.33</td>\n",
            "<td>84.67</td>\n",
            "<td>31.42</td>\n",
            "</tr>\n",
            "</table>\n",
            "\n",
            "\n",
            "## 3.2 Datasets\n",
            "\n",
            "In stage 1, stage 2 and stage 3, we use fineweb-edu (Lozhkov et al., 2024) as our main text training dataset, which accounts for 80% of the full text data. The remaining 20% of the text data comes from sentence-transformers/embedding-training-data9. The reason we choose the sentence-transformers/embedding-training-data is that the majority of the fineweb-edu data consists of passages. However, in addition to passages, we also require questions to enhance the diversity of our training data. The total amount of text training data is 8 million.\n",
            "For the documents in our dataset, we perform the following actions:\n",
            "1. We randomly select 30% of the documents and divide them into short texts, each consisting of 1 to 10 sentences.\n",
            "2. We randomly select 0.08% of the text and shuffle the words within it.\n",
            "In stage 4, we use the caption data of BAAI/Infinity-MM (Gu et al., 2024) as our vision training data.\n",
            "\n",
            "## 3.3 Results\n",
            "\n",
            "We evaluate the proposed Jasper and Stella models on the full MTEB benchmark, which encompasses 15 retrieval datasets, 4 reranking datasets, 12 classification datasets, 11 clustering datasets, 3 pair classification datasets, 10 semantic textual similarity datasets, and 1 summarization dataset.\n",
            "Table 1 presents the average score of our Jasper model across the overall performance and seven subcategory tasks of the MTEB benchmark. We compare our model with other frontier models on the MTEB leaderboard, as well as those with fewer than 2B parameters. Experimental results demonstrate that our Jasper model significantly outperforms other models with fewer than 2B parameters.\n",
            "Furthermore, despite having only 2B parameters, our model produces results that are comparable to those of models with 7B parameters.\n",
            "\n",
            "## 4 Discussion\n",
            "\n",
            "\n",
            "## 4.1 Instruction Robustness\n",
            "\n",
            "Instruction-based embedding models require an instruction to be prepended to a query or passage during text encoding. Currently, many state-of-the-art text embedding models use instructions to prompt the model and obtain better embeddings. Similar to the usage of large language models (Zhao et al., 2024b), different tasks necessitate different instructions, which is both logical and intuitive. Therefore, the ability to understand instructions is crucial for these text embedding models.\n",
            "Jasper is also an instruction-based embedding model. To demonstrate the impact of different prompts on the Jasper model, we conducted a simple experiment. Specifically, we evaluated Jasper on some short evaluation tasks using similar instructions generated by GPT-4o. Table 2 lists all the original and modified instructions. Based on the results shown in Table 3, we conclude that our Jasper model is robust to instructions and can accurately understand different instructions.\n",
            "\n",
            "## 4.2 Possible Improvements for Vision Encoding\n",
            "\n",
            "Due to time and resource constraints, we were only able to equip the Jasper model with a basic image encoding capability. Initially, stage 4 was envisioned as a fundamental visual-language alignment training phase, with a potential stage 5 involving contrastive learning utilizing a Visual Question Answering (VQA) dataset. Additionally, we observed oscillatory behavior in our loss function during stage 4. Overall, there is considerable room for enhancement in the multimodal training.\n",
            "\n",
            "## 5 Conclusion\n",
            "\n",
            "In this paper, we present the distillation-based training procedure for the Jasper model. We have\n",
            "\n",
            "9https://huggingface.co/datasets/sentence-transformers/embedding-training-data\n",
            "\n",
            "Chunk 5:\n",
            "Table 2: Original instructions and corresponding synonyms.\n",
            "\n",
            "\n",
            "<table>\n",
            "<tr>\n",
            "<th>Original Instruction</th>\n",
            "<th>Synonym of Original Instruction</th>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Classify the sentiment expressed in the given movie review text from the IMDB dataset</td>\n",
            "<td>Determine the sentiment conveyed in the provided movie review text from the IMDB dataset.</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Identify the topic or theme of StackExchange posts based on the titles</td>\n",
            "<td>Determine the subject or theme of StackExchange posts based on the titles.</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Given a news summary, retrieve other semantically similar summaries</td>\n",
            "<td>Given a news summary, find other summaries with similar meanings.</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Retrieve duplicate questions from StackOverflow forum</td>\n",
            "<td>Find duplicate questions on the StackOverflow forum.</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Given a title of a scientific paper, retrieve the titles of other relevant papers</td>\n",
            "<td>Given the title of a scientific paper, find the titles of other related papers.</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Classify the sentiment of a given tweet as either positive, negative, or neutral</td>\n",
            "<td>Determine the sentiment of a given tweet as positive, negative, or neutral.</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Given a claim, find documents that refute the claim</td>\n",
            "<td>Given a claim, locate documents that contradict the claim.</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Given a question, retrieve relevant documents that best answer the question</td>\n",
            "<td>Given a question, find relevant documents that best answer it.</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Retrieve tweets that are semantically similar to the given tweet</td>\n",
            "<td>Find tweets that have similar meanings to the given tweet.</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Retrieve semantically similar text.</td>\n",
            "<td>Find text with similar meanings.</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Identify the main category of Medrxiv papers based on the titles</td>\n",
            "<td>Determine the primary category of Medrxiv papers based on the titles.</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Retrieve duplicate questions from AskUbuntu forum</td>\n",
            "<td>Find duplicate questions on the AskUbuntu forum.</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Given a question, retrieve detailed question descriptions from Stackexchange that are duplicates to the given question</td>\n",
            "<td>Given a question, find detailed question descriptions from Stackexchange that are duplicates.</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Identify the main category of Biorxiv papers based on the titles and abstracts</td>\n",
            "<td>Determine the primary category of Biorxiv papers based on the titles and abstracts.</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Given a financial question, retrieve user replies that best answer the question</td>\n",
            "<td>Given a financial question, find user replies that best answer it.</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Given a online banking query, find the corresponding intents</td>\n",
            "<td>Given an online banking query, identify the corresponding intents.</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Identify the topic or theme of the given news articles</td>\n",
            "<td>Determine the subject or theme of the given news articles.</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Classify the emotion expressed in the given Twitter message into one of the six emotions: anger, fear, joy, love, sadness, and surprise. Given a user utterance as query, find the user intents</td>\n",
            "<td>Determine the emotion expressed in the given Twitter message as one of six emotions: anger, fear, joy, love, sadness, and surprise. Given a user utterance as a query, identify the user intents.</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Identify the main category of Biorxiv papers based on the titles</td>\n",
            "<td>Determine the primary category of Biorxiv papers based on the titles.</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Classify the given Amazon review into its appropriate rating category</td>\n",
            "<td>Classify the given Amazon review into its appropriate rating category.</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Given a scientific claim, retrieve documents that support or refute the claim</td>\n",
            "<td>Given a scientific claim, find documents that support or contradict the claim.</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Identify the topic or theme of StackExchange posts based on the given paragraphs</td>\n",
            "<td>Determine the subject or theme of StackExchange posts based on the given paragraphs.</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Given a scientific paper title, retrieve paper abstracts that are cited by the given paper</td>\n",
            "<td>Given a scientific paper title, find paper abstracts that are cited by the given paper.</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Classify the given comments as either toxic or not toxic</td>\n",
            "<td>Classify the given comments as toxic or non-toxic.</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Classify the intent domain of the given utterance in task-oriented conversation</td>\n",
            "<td>Determine the intent domain of the given utterance in task-oriented conversation.</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Retrieve duplicate questions from Sprint forum</td>\n",
            "<td>Find duplicate questions on the Sprint forum.</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Given a user utterance as query, find the user scenarios</td>\n",
            "<td>Given a user utterance as a query, identify the user scenarios.</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Classify the intent of the given utterance in task-oriented conversation</td>\n",
            "<td>Determine the intent of the given utterance in task-oriented conversation.</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Classify a given Amazon customer review text as either counterfactual or not-counterfactual</td>\n",
            "<td>Classify a given Amazon customer review text as counterfactual or non-counterfactual.</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Identify the main category of Medrxiv papers based on the titles and abstracts</td>\n",
            "<td>Determine the primary category of Medrxiv papers based on the titles and abstracts.</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Given a query on COVID-19, retrieve documents that answer the query</td>\n",
            "<td>Given a query on COVID-19, find documents that answer the query.</td>\n",
            "</tr>\n",
            "</table>\n",
            "\n",
            "Table 3: MTEB Results on different instructions.\n",
            "\n",
            "\n",
            "<table>\n",
            "<tr>\n",
            "<th>Task Type</th>\n",
            "<th>Task Name</th>\n",
            "<th>Original Score</th>\n",
            "<th>Score with Modified Instructions</th>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Classification</td>\n",
            "<td>MTOPDomainClassification</td>\n",
            "<td>0.992</td>\n",
            "<td>0.992</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Classification</td>\n",
            "<td>AmazonCounterfactualClassification</td>\n",
            "<td>0.958</td>\n",
            "<td>0.957</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Classification</td>\n",
            "<td>TweetSentimentExtractionClassification</td>\n",
            "<td>0.773</td>\n",
            "<td>0.776</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Classification</td>\n",
            "<td>EmotionClassification</td>\n",
            "<td>0.877</td>\n",
            "<td>0.859</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Classification</td>\n",
            "<td>MassiveIntentClassification</td>\n",
            "<td>0.853</td>\n",
            "<td>0.854</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Classification</td>\n",
            "<td>AmazonReviewsClassification</td>\n",
            "<td>0.629</td>\n",
            "<td>0.630</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Classification</td>\n",
            "<td>MassiveScenarioClassification</td>\n",
            "<td>0.912</td>\n",
            "<td>0.912</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Classification</td>\n",
            "<td>Banking77Classification</td>\n",
            "<td>0.873</td>\n",
            "<td>0.875</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Classification</td>\n",
            "<td>ImdbClassification</td>\n",
            "<td>0.971</td>\n",
            "<td>0.971</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Classification</td>\n",
            "<td>ToxicConversationsClassification</td>\n",
            "<td>0.913</td>\n",
            "<td>0.910</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Classification</td>\n",
            "<td>MTOPIntentClassification</td>\n",
            "<td>0.915</td>\n",
            "<td>0.912</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Clustering</td>\n",
            "<td>MedrxivClusteringS2S</td>\n",
            "<td>0.448</td>\n",
            "<td>0.448</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Clustering</td>\n",
            "<td>StackExchangeClusteringP2P</td>\n",
            "<td>0.494</td>\n",
            "<td>0.492</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Clustering</td>\n",
            "<td>StackExchangeClustering</td>\n",
            "<td>0.800</td>\n",
            "<td>0.795</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Clustering</td>\n",
            "<td>TwentyNewsgroupsClustering</td>\n",
            "<td>0.630</td>\n",
            "<td>0.625</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Clustering</td>\n",
            "<td>MedrxivClusteringP2P</td>\n",
            "<td>0.470</td>\n",
            "<td>0.468</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Clustering</td>\n",
            "<td>BiorxivClusteringS2S</td>\n",
            "<td>0.476</td>\n",
            "<td>0.475</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Clustering</td>\n",
            "<td>BiorxivClusteringP2P</td>\n",
            "<td>0.520</td>\n",
            "<td>0.518</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>PairClassification</td>\n",
            "<td>TwitterURLCorpus</td>\n",
            "<td>0.877</td>\n",
            "<td>0.877</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>PairClassification</td>\n",
            "<td>SprintDuplicateQuestions</td>\n",
            "<td>0.964</td>\n",
            "<td>0.964</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>PairClassification</td>\n",
            "<td>TwitterSemEval2015</td>\n",
            "<td>0.803</td>\n",
            "<td>0.801</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Reranking</td>\n",
            "<td>StackOverflowDupQuestions</td>\n",
            "<td>0.546</td>\n",
            "<td>0.548</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Reranking</td>\n",
            "<td>SciDocsRR</td>\n",
            "<td>0.891</td>\n",
            "<td>0.890</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Reranking</td>\n",
            "<td>AskUbuntuDupQuestions</td>\n",
            "<td>0.674</td>\n",
            "<td>0.676</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Retrieval</td>\n",
            "<td>CQADupstackMathematicaRetrieval</td>\n",
            "<td>0.369</td>\n",
            "<td>0.370</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Retrieval</td>\n",
            "<td>CQADupstackStatsRetrieval</td>\n",
            "<td>0.413</td>\n",
            "<td>0.413</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Retrieval</td>\n",
            "<td>CQADupstackTexRetrieval</td>\n",
            "<td>0.362</td>\n",
            "<td>0.362</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Retrieval</td>\n",
            "<td>SCIDOCS</td>\n",
            "<td>0.247</td>\n",
            "<td>0.247</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Retrieval</td>\n",
            "<td>CQADupstackEnglishRetrieval</td>\n",
            "<td>0.543</td>\n",
            "<td>0.543</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Retrieval</td>\n",
            "<td>ArguAna</td>\n",
            "<td>0.653</td>\n",
            "<td>0.652</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Retrieval</td>\n",
            "<td>TRECCOVID</td>\n",
            "<td>0.865</td>\n",
            "<td>0.866</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Retrieval</td>\n",
            "<td>CQADupstackUnixRetrieval</td>\n",
            "<td>0.482</td>\n",
            "<td>0.482</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Retrieval</td>\n",
            "<td>CQADupstackGamingRetrieval</td>\n",
            "<td>0.632</td>\n",
            "<td>0.633</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Retrieval</td>\n",
            "<td>CQADupstackGisRetrieval</td>\n",
            "<td>0.444</td>\n",
            "<td>0.448</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Retrieval</td>\n",
            "<td>CQADupstackWordpressRetrieval</td>\n",
            "<td>0.388</td>\n",
            "<td>0.386</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Retrieval</td>\n",
            "<td>FIQA2018</td>\n",
            "<td>0.601</td>\n",
            "<td>0.601</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Retrieval</td>\n",
            "<td>SciFact</td>\n",
            "<td>0.805</td>\n",
            "<td>0.805</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Retrieval</td>\n",
            "<td>CQADupstackPhysicsRetrieval</td>\n",
            "<td>0.549</td>\n",
            "<td>0.548</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Retrieval</td>\n",
            "<td>NFCorpus</td>\n",
            "<td>0.431</td>\n",
            "<td>0.431</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Retrieval</td>\n",
            "<td>CQADupstackProgrammersRetrieval</td>\n",
            "<td>0.505</td>\n",
            "<td>0.505</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Retrieval</td>\n",
            "<td>CQADupstackAndroidRetrieval</td>\n",
            "<td>0.571</td>\n",
            "<td>0.571</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Retrieval</td>\n",
            "<td>CQADupstackWebmastersRetrieval</td>\n",
            "<td>0.464</td>\n",
            "<td>0.464</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>STS</td>\n",
            "<td>BIOSSES</td>\n",
            "<td>0.848</td>\n",
            "<td>0.854</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>STS</td>\n",
            "<td>STS13</td>\n",
            "<td>0.897</td>\n",
            "<td>0.888</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>STS</td>\n",
            "<td>STS12</td>\n",
            "<td>0.803</td>\n",
            "<td>0.804</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>STS</td>\n",
            "<td>STSBenchmark</td>\n",
            "<td>0.888</td>\n",
            "<td>0.886</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>STS</td>\n",
            "<td>STS15</td>\n",
            "<td>0.902</td>\n",
            "<td>0.900</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>STS</td>\n",
            "<td>STS14</td>\n",
            "<td>0.853</td>\n",
            "<td>0.851</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>STS</td>\n",
            "<td>STS16</td>\n",
            "<td>0.864</td>\n",
            "<td>0.869</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>STS</td>\n",
            "<td>STS22</td>\n",
            "<td>0.672</td>\n",
            "<td>0.748</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>STS</td>\n",
            "<td>SICK-R</td>\n",
            "<td>0.822</td>\n",
            "<td>0.823</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>STS</td>\n",
            "<td>STS17</td>\n",
            "<td>0.911</td>\n",
            "<td>0.908</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Summarization</td>\n",
            "<td>SummEval</td>\n",
            "<td>0.313</td>\n",
            "<td>0.314</td>\n",
            "</tr>\n",
            "<tr>\n",
            "<td>Average Score</td>\n",
            "<td></td>\n",
            "<td>0.686</td>\n",
            "<td>0.687</td>\n",
            "</tr>\n",
            "</table>\n",
            "\n",
            "designed three loss functions to distill multiple large teacher embedding models into a student embedding model from diverse perspectives. Subsequently, we utilized an MRL-based training method to reduce the vector dimensionality of the student model. Experimental results on the MTEB demonstrate that our Jasper model achieves state-of-the-art performance at the 2B parameter scale and exhibits comparable results to other top-ranked embedding models with 7B parameters. Future work will further explore the alignment between multiple modalities.\n",
            "\n",
            "## References\n",
            "\n",
            "Prabhat Agarwal, Minhazul Islam SK, Nikil Pancha, Kurchi Subhra Hazra, Jiajing Xu, and Chuck Rosenberg. 2024. Omnisearchsage: Multi-task multi-entity embeddings for pinterest search. In Companion Proceedings of the ACM on Web Conference 2024, WWW 2024, Singapore, Singapore, May 13-17, 2024, pages 121-130. ACM.\n",
            "Ibrahim Alabdulmohsin, Xiaohua Zhai, Alexander Kolesnikov, and Lucas Beyer. 2024. Getting vit in shape: Scaling laws for compute-optimal model design.\n",
            "Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, Qianyu Guo, Meng Wang, and Haofen Wang. 2023. Retrieval-augmented generation for large language models: A survey. CoRR, abs/2312.10997.\n",
            "Shuhao Gu, Jialing Zhang, Siyuan Zhou, Kevin Yu, Zhaohu Xing, Liangdong Wang, Zhou Cao, Jintao Jia, Zhuoyi Zhang, Yixuan Wang, Zhenchong Hu, Bo-Wen Zhang, Jijie Li, Dong Liang, Yingli Zhao, Yulong Ao, Yaoqi Liu, Fangxiang Feng, and Guang Liu. 2024. Infinity-mm: Scaling multimodal performance with large-scale and high-quality instruction data.\n",
            "Sebastian Hofstätter, Sheng-Chieh Lin, Jheng-Hong Yang, Jimmy Lin, and Allan Hanbury. 2021. Efficiently teaching an effective dense retriever with balanced topic aware sampling. In SIGIR '21: The 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, Canada, July 11-15, 2021, pages 113-122. ACM.\n",
            "Abhinav Ramesh Kashyap, Thanh-Tung Nguyen, Viktor Schlegel, Stefan Winkler, See-Kiong Ng, and Soujanya Poria. 2024. A comprehensive survey of sentence representations: From the BERT epoch to the CHATGPT era and beyond. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics, EACL\n",
            "\n",
            "Chunk 6:\n",
            "2024 - Volume 1: Long Papers, St. Julian's, Malta, March 17-22, 2024, pages 1738-1751. Association for Computational Linguistics.\n",
            "Aditya Kusupati, Gantavya Bhatt, Aniket Rege, Matthew Wallingford, Aditya Sinha, Vivek Ramanujan, William Howard-Snyder, Kaifeng Chen, Sham Kakade, Prateek Jain, and Ali Farhadi. 2024. Matryoshka representation learning.\n",
            "Chankyu Lee, Rajarshi Roy, Mengyao Xu, Jonathan Raiman, Mohammad Shoeybi, Bryan Catanzaro, and Wei Ping. 2024. Nv-embed: Improved techniques for training llms as generalist embedding models. arXiv preprint arXiv:2405.17428.\n",
            "Chaofan Li, MingHao Qin, Shitao Xiao, Jianlyu Chen, Kun Luo, Yingxia Shao, Defu Lian, and Zheng Liu. 2024. Making text embedders few-shot learners.\n",
            "Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. 2021. In-batch negatives for knowledge distillation with tightly-coupled teachers for dense retrieval. In Proceedings of the 6th Workshop on Representation Learning for NLP, RepL4NLP@ACL-IJCNLP 2021, Online, August 6, 2021, pages 163-173. Association for Computational Linguistics.\n",
            "Anton Lozhkov, Loubna Ben Allal, Leandro von Werra, and Thomas Wolf. 2024. Fineweb-edu: the finest collection of educational content.\n",
            "Gabriel de Souza P Moreira, Radek Osmulski, Mengyao Xu, Ronay Ak, Benedikt Schifferer, and Even Oldridge. 2024. Nv-retriever: Improving text embedding models with effective hard-negative mining. arXiv preprint arXiv:2407.15831.\n",
            "Niklas Muennighoff, Nouamane Tazi, Loïc Magne, and Nils Reimers. 2023. MTEB: massive text embedding benchmark. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2023, Dubrovnik, Croatia, May 2-6, 2023, pages 2006-2029. Association for Computational Linguistics.\n",
            "Xiaohua Wang, Zhenghua Wang, Xuan Gao, Feiran Zhang, Yixin Wu, Zhibo Xu, Tianyuan Shi, Zhengyuan Wang, Shizheng Li, Qi Qian, Ruicheng Yin, Changze Lv, Xiaoqing Zheng, and Xuanjing Huang. 2024. Searching for best practices in retrieval-augmented generation. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024, Miami, FL, USA, November 12-16, 2024, pages 17716-17736. Association for Computational Linguistics.\n",
            "Shitao Xiao, Zheng Liu, Peitian Zhang, and Niklas Muennighoff. 2023. C-pack: Packaged resources to advance general chinese embedding.\n",
            "Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. 2023. Sigmoid loss for language image pre-training.\n",
            "Wayne Xin Zhao, Jing Liu, Ruiyang Ren, and Ji-Rong Wen. 2024a. Dense text retrieval based on pretrained language models: A survey. ACM Trans. Inf. Syst., 42(4):89:1-89:60.\n",
            "Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. 2024b. A survey of large language models.\n",
            "Junjie Zhou, Zheng Liu, Shitao Xiao, Bo Zhao, and Yongping Xiong. 2024. VISTA: visualized text embedding for universal multi-modal retrieval. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2024, Bangkok, Thailand, August 11-16, 2024, pages 3185-3200. Association for Computational Linguistics.\n",
            "\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "# Create agent using OpenAI Agents SDK\n",
        "\n",
        "For this example, we'll create two agents to compare how effectively the LLM answers questions when it is given the document as a raw PDF, versus when it is given TensorLake's parsed output: structured data, markdown chunks, and the complete document layout."
      ],
      "metadata": {
        "id": "7nWQst_9NPIr"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Step 1: Create a Basic Agent\n",
        "\n",
        "This agent will only reference the PDF document directly."
      ],
      "metadata": {
        "id": "ytGsz5bh_sG1"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def create_qa_agent_basic(document: str):\n",
        "    \"\"\"Create a Q&A agent specialized for research paper analysis.\"\"\"\n",
        "    return Agent(\n",
        "        name=\"Research Paper Q&A Basic Agent\",\n",
        "        instructions=f\"\"\"\n",
        "You are a knowledgeable and precise assistant designed to answer questions based on the content of academic research papers. Your goal is to help users understand and extract relevant insights from the document linked below.\n",
        "\n",
        "Document to analyze:\n",
        "{document}\n",
        "\n",
        "Capabilities:\n",
        "- Accurately summarize and interpret sections, tables, and figures.\n",
        "- Understand technical terminology, methodologies, and experimental setups.\n",
        "- Identify and explain findings, results, and conclusions.\n",
        "- Recognize document structure (abstract, introduction, methods, results, discussion, references).\n",
        "- Extract insights from equations, data, and complex diagrams when described.\n",
        "\n",
        "Guidelines:\n",
        "- Always ground your answers in the content of the document.\n",
        "- Use direct quotes or paraphrased explanations from the paper when helpful.\n",
        "- If a question cannot be answered from the document, clearly state that.\n",
        "- Be concise but informative. Use structured responses (e.g., bullet points or short summaries) when appropriate.\n",
        "\n",
        "Answer all questions as an expert reader of the paper, supporting your responses with references to the content where necessary.\n",
        "\"\"\")"
      ],
      "metadata": {
        "id": "DoHi2j6U0tgE"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Step 2: Create an Agent that leverages Tensorlake results\n",
        "\n",
        "This agent will only reference output from Tensorlake parsing the PDF, including structured data, markdown chunks, and a complete document layout."
      ],
      "metadata": {
        "id": "eY4zyyLq_0Ta"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def create_qa_agent(markdown_chunks: str, structured_data: str):\n",
        "    \"\"\"Create a Q&A agent specialized for research paper analysis.\"\"\"\n",
        "    return Agent(\n",
        "        name=\"Research Paper Q&A Agent\",\n",
        "        instructions=f\"\"\"\n",
        "You are a knowledgeable and precise assistant designed to answer questions based on the content of academic research papers. Your goal is to help users understand and extract relevant insights from the document provided below.\n",
        "Use both the markdown chunks and structured data as reference material.\n",
        "\n",
        "Markdown Chunks:\n",
        "{markdown_chunks}\n",
        "\n",
        "Structured Data:\n",
        "{structured_data}\n",
        "\n",
        "Capabilities:\n",
        "- Accurately summarize and interpret sections, tables, and figures.\n",
        "- Understand technical terminology, methodologies, and experimental setups.\n",
        "- Identify and explain findings, results, and conclusions.\n",
        "- Recognize document structure (abstract, introduction, methods, results, discussion, references).\n",
        "- Extract insights from equations, data, and complex diagrams when described.\n",
        "\n",
        "Guidelines:\n",
        "- Always ground your answers in the content of the document.\n",
        "- Use direct quotes or paraphrased explanations from the paper when helpful.\n",
        "- If a question cannot be answered from the document, clearly state that.\n",
        "- Be concise but informative. Use structured responses (e.g., bullet points or short summaries) when appropriate.\n",
        "\n",
        "Answer all questions as an expert reader of the paper, supporting your responses with references to the content where necessary.\n",
        "\"\"\")"
      ],
      "metadata": {
        "id": "wuepTVEI_FAN"
      },
      "execution_count": null,
      "outputs": []
    },
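    {
      "cell_type": "markdown",
      "source": [
        "Both agents receive the parse output inline in their instructions, and for long papers the concatenated markdown can exceed the model's context window. Below is a minimal sketch of one way to cap it; the `combine_chunks` helper and its character budget are illustrative assumptions, not a Tensorlake or OpenAI limit. You could use `combine_chunks(chunk.content for chunk in result.chunks)` in place of the manual concatenation in Step 4."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "def combine_chunks(chunks, max_chars=60000):\n",
        "    \"\"\"Concatenate markdown chunks until a character budget is reached.\"\"\"\n",
        "    parts, total = [], 0\n",
        "    for chunk in chunks:\n",
        "        # Stop before the combined text would exceed the budget.\n",
        "        if total + len(chunk) > max_chars:\n",
        "            break\n",
        "        parts.append(chunk)\n",
        "        total += len(chunk)\n",
        "    return \"\\n\\n\".join(parts)"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },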
    {
      "cell_type": "markdown",
      "source": [
        "## Step 3: Create a Comparison Agent\n",
        "\n",
        "This agent will compare the results from the two other agents and provide an analysis of what information may have been missed by only leveraging the PDF instead of the parsed Tensorlake output."
      ],
      "metadata": {
        "id": "HTJxJ0fk_-8f"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def compare_results(basic_results: str, results: str):\n",
        "    \"\"\"Create a comparator agent for the basic and advanced agent results.\"\"\"\n",
        "    return Agent(\n",
        "        name=\"Research Paper Result Comparator Agent\",\n",
        "        instructions=f\"\"\"\n",
        "You are a knowledgeable and precise assistant designed to compare results from two agents that answered the same questions about an academic research paper. Your goal is to help users understand which set of results is more complete and accurate, based on the two outputs below.\n",
        "\n",
        "Basic Agent Results:\n",
        "{basic_results}\n",
        "\n",
        "Advanced Agent Results:\n",
        "{results}\n",
        "\n",
        "Capabilities:\n",
        "- Accurately determine which results are more complete and accurate.\n",
        "- Compare the accuracy of the results\n",
        "\n",
        "Guidelines:\n",
        "- Always ground your answers in the content of the results.\n",
        "- Use direct quotes or paraphrased explanations from the results when helpful.\n",
        "- Be concise but informative. Use structured responses (e.g., bullet points or short summaries) when appropriate.\n",
        "\n",
        "Provide a concise summary of which results are more complete and accurate.\n",
        "\"\"\")"
      ],
      "metadata": {
        "id": "XJfrMzhC1QUo"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Step 4: Run and Test the Agent\n",
        "\n",
        "You can ask questions about the document in natural language and get detailed answers."
      ],
      "metadata": {
        "id": "efRslF3ZNVzB"
      }
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "npIeT2XJE0u3",
        "outputId": "32e74631-4a35-4abd-c8a8-5983bb6e89e4"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "### Summary Comparison\n",
            "\n",
            "#### Paper Overview\n",
            "\n",
            "- **Basic Agent Results**:\n",
            "  - Focuses on the distillation of large embedding models, specifically Jasper and Stella.\n",
            "  - Aims at efficient distillation for deployment on resource-limited devices.\n",
            "\n",
            "- **Advanced Agent Results**:\n",
            "  - Describes a multi-stage distillation framework for reducing model size while maintaining performance, focusing on dense retrieval applications.\n",
            "  - Discusses the Jasper model built on the Stella model, achieving competitive performance on the Massive Text Embedding Benchmark (MTEB).\n",
            "\n",
            "**More Complete**: Advanced Agent Results provide a more detailed context, including specific applications and performance benchmarks.\n",
            "\n",
            "#### Key Points\n",
            "\n",
            "- **Basic Agent Results**:\n",
            "  - Emphasizes model size reduction and performance trade-offs.\n",
            "  - Discusses use cases for real-time applications.\n",
            "\n",
            "- **Advanced Agent Results**:\n",
            "  - Details a multi-stage distillation framework with custom loss functions.\n",
            "  - Introduces Matryoshka Representation Learning (MRL) for dimensionality reduction.\n",
            "  - Highlights multimodal potential with a vision encoder.\n",
            "\n",
            "**More Complete**: Advanced Agent Results offer in-depth explanations of the distillation process and additional innovations like MRL and multimodal capabilities.\n",
            "\n",
            "#### Architecture of Jasper\n",
            "\n",
            "- **Basic Agent Results**:\n",
            "  - Describes convolutional layers, block repetition, residual connections, and design philosophy optimizing for speed and scalability.\n",
            "\n",
            "- **Advanced Agent Results**:\n",
            "  - Details components like language and vision encoders, pooler, and fully connected layers.\n",
            "  - Explains how these components align textual and visual embeddings for multimodal capabilities.\n",
            "\n",
            "**More Complete**: Advanced Agent Results provide a comprehensive breakdown of the Jasper architecture, including its multimodal features.\n",
            "\n",
            "### Overall Conclusion\n",
            "\n",
            "The **Advanced Agent Results** are more complete and accurate, offering a rich understanding of the paper's objectives, innovations, and detailed architecture of the Jasper model.\n"
          ]
        }
      ],
      "source": [
        "# pass the extracted chunks to the agent\n",
        "markdown_chunks = \"\"\n",
        "for chunk in result.chunks:\n",
        "    markdown_chunks += chunk.content + \"\\n\\n\"\n",
        "\n",
        "# pass the structured data to the agent\n",
        "structured_data = json.dumps(result.structured_data[0].data, indent=2)\n",
        "\n",
        "# Ask questions (joined with newlines so the agent sees them as distinct prompts)\n",
        "questions = '\\n'.join([\n",
        "    \"What is the paper about? What are the key points from the paper that we can further leverage?\",\n",
        "    \"Describe the architecture of Jasper. How is it structured?\"\n",
        "])\n",
        "\n",
        "# Create Q&A agents\n",
        "basic_agent = create_qa_agent_basic(\"https://pub-226479de18b2493f96b64c6674705dd8.r2.dev/Jasper%20and%20Stells-%20distillation%20of%20SOTA%20embedding%20models.pdf\")\n",
        "agent = create_qa_agent(markdown_chunks, structured_data)\n",
        "\n",
        "# Time the agent runs (the model calls, not agent construction)\n",
        "basic_start_time = time.time()\n",
        "basic_result = await Runner.run(basic_agent, questions)\n",
        "basic_end_time = time.time()\n",
        "\n",
        "advanced_start_time = time.time()\n",
        "advanced_result = await Runner.run(agent, questions)\n",
        "advanced_end_time = time.time()\n",
        "\n",
        "print(f\"Basic Agent took {basic_end_time - basic_start_time:.2f} seconds to run\")\n",
        "print(f\"Advanced Agent took {advanced_end_time - advanced_start_time:.2f} seconds to run\")\n",
        "\n",
        "\n",
        "comparator_agent = compare_results(basic_result.final_output, advanced_result.final_output)\n",
        "comparison_result = await Runner.run(comparator_agent, questions)\n",
        "\n",
        "\n",
        "print(comparison_result.final_output)"
      ]
    },
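    {
      "cell_type": "markdown",
      "source": [
        "Timing with inline start/end variables works, but as you add more agents a small awaitable helper keeps the measurements consistent. This is a minimal sketch (the helper name `timed` is our own, not part of either SDK); `time.perf_counter` is better suited to interval timing than `time.time`."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "import time\n",
        "\n",
        "async def timed(label, coro):\n",
        "    # Await the coroutine, report the elapsed wall-clock time, return its result.\n",
        "    start = time.perf_counter()\n",
        "    result = await coro\n",
        "    print(f\"{label} took {time.perf_counter() - start:.2f} seconds\")\n",
        "    return result\n",
        "\n",
        "# For example:\n",
        "# basic_result = await timed(\"Basic agent\", Runner.run(basic_agent, questions))\n",
        "# advanced_result = await timed(\"Advanced agent\", Runner.run(agent, questions))"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },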
    {
      "cell_type": "markdown",
      "source": [
        "# Next Steps\n",
        "\n",
        "Now that you have the basics down, check out one of these other resources to dive deeper into document parsing with Tensorlake:\n",
        "- [Python SDK and API Docs](https://docs.tensorlake.ai/)\n",
        "- [Blog](https://tensorlake.ai/blog)\n",
        "- [Community Slack](https://tensorlakecloud.slack.com/)"
      ],
      "metadata": {
        "id": "xvTNu4MXAZMa"
      }
    }
  ],
  "metadata": {
    "colab": {
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}