---
title: Embed Role
description: Embed model role
keywords: [embedding, model, role, embeddings]
sidebar_position: 5
---

An "embeddings model" is trained to convert a piece of text into a vector, which can later be rapidly compared to other vectors to determine similarity between the pieces of text. Embeddings models are typically much smaller than LLMs, and are extremely fast and cheap in comparison.

In Continue, embeddings are generated during indexing and then used by [codebase awareness](/guides/codebase-documentation-awareness) to perform similarity search over your codebase.

You can add `embed` to a model's `roles` to specify that it can be used to embed.

<Info>
  **Built-in model (VS Code only):** `transformers.js` is used as a built-in
  embeddings model in VS Code. JetBrains currently has no built-in embedder.
</Info>

## Recommended embedding models

<Info>
  See our [comprehensive model recommendations](/customize/models#recommended-models) for the best embedding models comparison.
</Info>

If you can use any model, we recommend `voyage-code-3`, which is listed below along with the rest of the embeddings model options.

If you want to generate embeddings locally, we recommend using `nomic-embed-text` with [Ollama](../model-providers/top-level/ollama#embeddings-model).

### Voyage AI

After obtaining an API key from [here](https://www.voyageai.com/), you can configure it like this:

<Tabs>
  <Tab title="Hub">
  [Voyage Code 3 Embedder Block](https://hub.continue.dev/voyageai/voyage-code-3)
  </Tab>
  <Tab title="YAML">
  ```yaml title="config.yaml"
  name: My Config
  version: 0.0.1
  schema: v1

  models:
    - name: Voyage Code 3
      provider: voyage
      model: voyage-code-3
      apiKey: <YOUR_VOYAGE_API_KEY>
      roles:
        - embed
  ```
  </Tab>
  <Tab title="JSON">
  ```json title="config.json"
  {
    "embeddingsProvider": {
      "provider": "voyage",
      "model": "voyage-code-3",
      "apiKey": "<YOUR_VOYAGE_API_KEY>"
    }
  }
  ```
  </Tab>
</Tabs>

### Ollama

See [here](../model-providers/top-level/ollama#embeddings-model) for instructions on how to use Ollama for embeddings.
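The linked page has full instructions; as a minimal sketch, assuming a local Ollama server running on its default port and the `nomic-embed-text` model already pulled, the config might look like:

```yaml title="config.yaml"
models:
  - name: Nomic Embed Text
    provider: ollama
    model: nomic-embed-text
    roles:
      - embed
```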

### Transformers.js (currently VS Code only)

[Transformers.js](https://huggingface.co/docs/transformers.js/index) is a JavaScript port of the popular [Transformers](https://huggingface.co/transformers/) library. It allows embeddings to be calculated entirely locally. The model used is `all-MiniLM-L6-v2`, which is shipped alongside the Continue extension.

<Tabs>
    <Tab title="YAML">
    ```yaml title="config.yaml"
    name: My Config
    version: 0.0.1
    schema: v1

    models:
      - name: default-transformers
        provider: transformers.js
        roles:
          - embed
    ```
    </Tab>
    <Tab title="JSON">
    ```json title="config.json"
    {
      "embeddingsProvider": {
        "provider": "transformers.js"
      }
    }
    ```
    </Tab>
</Tabs>

### Text Embeddings Inference

[Hugging Face Text Embeddings Inference](https://huggingface.co/docs/text-embeddings-inference/en/index) enables you to host your own embeddings endpoint. You can configure embeddings to use your endpoint as follows:

<Tabs>
  {/* HUB_TODO nonexistent block */}
  {/* <Tab title="Hub">
  [HuggingFace Text Embedder Block](https://hub.continue.dev/)
  </Tab> */}
  <Tab title="YAML">
  ```yaml title="config.yaml"
  name: My Config
  version: 0.0.1
  schema: v1

  models:
    - name: Huggingface TEI Embedder
      provider: huggingface-tei
      apiBase: http://localhost:8080
      apiKey: <YOUR_TEI_API_KEY>
      roles: [embed]
  ```
  </Tab>
  <Tab title="JSON">
  ```json title="config.json"
  {
    "embeddingsProvider": {
      "provider": "huggingface-tei",
      "apiBase": "http://localhost:8080",
      "apiKey": "<YOUR_TEI_API_KEY>"
    }
  }
  ```
  </Tab>
</Tabs>

### OpenAI

See [here](../model-providers/top-level/openai#how-to-configure-openai-embeddings-models) for instructions on how to use OpenAI for embeddings.
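The linked page has full instructions; as a minimal sketch, assuming OpenAI's `text-embedding-3-small` model and your own API key, the config might look like:

```yaml title="config.yaml"
models:
  - name: OpenAI Embeddings
    provider: openai
    model: text-embedding-3-small
    apiKey: <YOUR_OPENAI_API_KEY>
    roles:
      - embed
```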

### Cohere

See [here](../model-providers/more/cohere#embeddings-model) for instructions on how to use Cohere for embeddings.

### Gemini

See [here](../model-providers/top-level/gemini#how-to-configure-gemini-embeddings-models) for instructions on how to use Gemini for embeddings.

### Vertex

See [here](../model-providers/top-level/vertexai#how-to-configure-vertex-ai-embeddings-models) for instructions on how to use Vertex for embeddings.

### Mistral

See [here](../model-providers/more/mistral#how-to-configure-mistral-embeddings-models) for instructions on how to use Mistral for embeddings.
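The linked page has full instructions; as a minimal sketch, assuming Mistral's `mistral-embed` model and your own API key, the config might look like:

```yaml title="config.yaml"
models:
  - name: Mistral Embed
    provider: mistral
    model: mistral-embed
    apiKey: <YOUR_MISTRAL_API_KEY>
    roles:
      - embed
```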

### NVIDIA

See [here](../model-providers/more/nvidia#embeddings-model) for instructions on how to use NVIDIA for embeddings.

### Bedrock

See [here](../model-providers/top-level/bedrock#how-to-configure-amazon-bedrock-embeddings-models) for instructions on how to use Bedrock for embeddings.

### WatsonX

See [here](../model-providers/more/watsonx#embeddings-model) for instructions on how to use WatsonX for embeddings.

### LMStudio

See [here](../model-providers/top-level/lmstudio#embeddings-model) for instructions on how to use LMStudio for embeddings.
