---
title: 'Embedding Models for RAG'
sidebarTitle: 'Overview'
---

Embedding models convert text into vector embeddings.
These embeddings power tasks such as similarity search, clustering, and classification.
In a RAG pipeline, an embedding model converts the input text into an embedding that is then used to retrieve relevant (similar)
documents from the document store.
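The retrieval step can be sketched in a few lines: embed the query, score it against each stored document embedding, and return the closest matches. Below is a minimal sketch using toy 3-dimensional vectors in place of real model output (real embeddings have hundreds or thousands of dimensions, per the table below); the document names and vectors are illustrative only.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: dot product divided by the product of vector norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for embeddings returned by a real model
documents = {
    "invoice policy": [0.9, 0.1, 0.0],
    "refund policy": [0.8, 0.2, 0.1],
    "office dog photos": [0.0, 0.1, 0.9],
}
query_embedding = [0.85, 0.15, 0.05]  # pretend this came from the embedding model

# Rank documents by similarity to the query; the top hits feed the LLM prompt
ranked = sorted(
    documents,
    key=lambda name: cosine_similarity(query_embedding, documents[name]),
    reverse=True,
)
print(ranked[0])  # the most similar document
```

In production, a vector database (e.g. Postgres with `pgvector`) performs this scoring with an index instead of a linear scan, but the ranking logic is the same.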

<table>
  <tr>
    <th>Vendor(s)</th>
    <th>Model</th>
    <th>Dimensions</th>
    <th>Max tokens</th>
    <th>Cost</th>
    <th>MTEB avg score</th>
    <th>Similarity metric</th>
  </tr>
  <tr>
    <td rowSpan="2">
      <a href="./open_ai">OpenAI</a>
    </td>
    <td>text-embedding-3-small</td>
    <td>1536 (scales down)</td>
    <td>8191</td>
    <td>$0.02 / 1M tokens</td>
    <td>62.3</td>
    <td>cosine, dot product, L2</td>
  </tr>
  <tr>
    <td>text-embedding-3-large</td>
    <td>3072 (scales down)</td>
    <td>8191</td>
    <td>$0.13 / 1M tokens</td>
    <td>64.6</td>
    <td>cosine, dot product, L2</td>
  </tr>
  <tr>
    <td>
      <a href="./google">Google</a>
    </td>
    <td>text-embedding-preview-0409 / text-embedding-004</td>
    <td>768 (scales down)</td>
    <td>2048</td>
    <td>$0.025 / 1M tokens in Vertex; free in Gemini</td>
    <td>66.31</td>
    <td>cosine, L2</td>
  </tr>
  <tr>
    <td rowSpan="2">
      <a href="./fireworks">Fireworks</a>
    </td>
    <td>thenlper/gte-large</td>
    <td>1024</td>
    <td>512</td>
    <td>$0.016 / 1M tokens</td>
    <td>63.23</td>
    <td>cosine</td>
  </tr>
  <tr>
    <td>nomic-ai/nomic-embed-text-v1.5</td>
    <td>768 (scales down)</td>
    <td>8192</td>
    <td>$0.008 / 1M tokens</td>
    <td>62.28</td>
    <td>cosine</td>
  </tr>
  <tr>
    <td>
      <a href="./deepinfra">DeepInfra</a>
    </td>
    <td>gte-large</td>
    <td>1024</td>
    <td>512</td>
    <td>$0.010 / 1M tokens</td>
    <td>63.23</td>
    <td>cosine</td>
  </tr>
  <tr>
    <td>
      <a href="./cohere">Cohere</a>
    </td>
    <td>embed-english-v3.0</td>
    <td>1024</td>
    <td>512</td>
    <td>$0.10 / 1M tokens</td>
    <td>64.5</td>
    <td>cosine</td>
  </tr>
  <tr>
    <td rowSpan="4">
      <a href="./voyage">Voyage</a>
    </td>
    <td>voyage-large-2-instruct</td>
    <td>1024</td>
    <td>16000</td>
    <td>$0.12 / 1M tokens</td>
    <td>68.28</td>
    <td>cosine, dot product, L2</td>
  </tr>
  <tr>
    <td>voyage-2</td>
    <td>1024</td>
    <td>4000</td>
    <td>$0.10 / 1M tokens</td>
    <td></td>
    <td>cosine, dot product, L2</td>
  </tr>
  <tr>
    <td>voyage-code-2</td>
    <td>1536</td>
    <td>16000</td>
    <td>$0.12 / 1M tokens</td>
    <td></td>
    <td>cosine, dot product, L2</td>
  </tr>
  <tr>
    <td>voyage-law-2</td>
    <td>1024</td>
    <td>16000</td>
    <td>$0.12 / 1M tokens</td>
    <td></td>
    <td>cosine, dot product, L2</td>
  </tr>
</table>
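
Several models above are marked "(scales down)": their embeddings are trained so that a truncated prefix of the vector remains useful, letting you trade quality for storage and speed. One common recipe (used, for example, with Matryoshka-style models such as nomic-embed-text-v1.5) is to keep the first `dims` components and re-normalize; the sketch below assumes this truncate-and-renormalize approach, and the toy vector is illustrative only.

```python
import math

def truncate_embedding(embedding, dims):
    """Keep the first `dims` components and re-normalize to unit length.

    Re-normalizing keeps cosine and dot-product scores on a comparable
    scale after truncation.
    """
    head = embedding[:dims]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

full = [0.5, 0.5, 0.5, 0.5]  # toy 4-d "embedding"
small = truncate_embedding(full, 2)
print(small)  # a unit-length 2-d vector
```

Some APIs do this server-side (e.g. OpenAI's embeddings endpoint accepts a `dimensions` parameter for the text-embedding-3 models), so check your vendor's docs before truncating manually.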

## Explanation of columns

- **Vendor(s)**: The vendor(s) that provide the model as a service.
- **Model**: The name of the model.
- **Dimensions**: The number of dimensions in the vector embeddings that the model generates.
- **Max tokens**: The maximum number of tokens that can be passed to the model in a single request.
- **Cost**: The cost of using the model (based on the vendor's pricing page, where available).
- **MTEB avg score**: The [Massive Text Embedding Benchmark (MTEB)](https://github.com/embeddings-benchmark/mteb) average score. MTEB is a benchmark for evaluating the quality of embeddings across a range of tasks. The higher the score, the better the embeddings.
- **Similarity metric**: The similarity metric(s) recommended by the model authors for use with the embeddings. We list only the metrics supported by `pgvector`; some of the models may support additional metrics.
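
To make the three metrics in the table concrete, here is a small sketch computing cosine similarity, dot product, and L2 (Euclidean) distance. Note that for unit-length vectors, which many embedding models return, cosine similarity equals the dot product, and L2 distance produces the same ranking; the vectors below are toy examples.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def l2_distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_similarity(a, b):
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

a = [0.6, 0.8]  # already unit length
b = [1.0, 0.0]  # already unit length

print(cosine_similarity(a, b))  # 0.6
print(dot(a, b))                # 0.6, same as cosine for unit vectors
print(l2_distance(a, b))        # sqrt(2 - 2 * 0.6)
```

In `pgvector`, these correspond to the `<=>` (cosine distance), `<#>` (negative inner product), and `<->` (L2 distance) operators, respectively.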
