---
title: "Embedding models"
---

## Overview

<Note>
This overview covers **text-based embedding models**. LangChain does not currently support multimodal embeddings.
</Note>

Embedding models transform raw text—such as a sentence, paragraph, or tweet—into a fixed-length vector of numbers that captures its **semantic meaning**. These vectors allow machines to compare and search text based on meaning rather than exact words.

In practice, this means that texts with similar ideas are placed close together in the vector space. For example, instead of matching only the phrase *"machine learning"*, embeddings can surface documents that discuss related concepts even when different wording is used.

### How it works

1. **Vectorization** — The model encodes each input string as a high-dimensional vector.
2. **Similarity scoring** — Vectors are compared using mathematical metrics to measure how closely related the underlying texts are.

### Similarity metrics

Several metrics are commonly used to compare embeddings:

* **Cosine similarity** — measures the angle between two vectors.
* **Euclidean distance** — measures the straight-line distance between points.
* **Dot product** — measures how much one vector projects onto another.

Here's an example of computing cosine similarity between two vectors:

```python
import numpy as np

def cosine_similarity(vec1, vec2):
    dot = np.dot(vec1, vec2)
    return dot / (np.linalg.norm(vec1) * np.linalg.norm(vec2))

# Example vectors; in practice these come from an embedding model
query_embedding = np.array([0.1, 0.8, 0.3])
document_embedding = np.array([0.2, 0.7, 0.4])

similarity = cosine_similarity(query_embedding, document_embedding)
print("Cosine Similarity:", similarity)
```
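The other two metrics can be sketched with the standard library alone (the vectors below are placeholders, not real model output):

```python
import math

vec1 = [0.1, 0.8, 0.3]
vec2 = [0.2, 0.7, 0.4]

# Euclidean distance: straight-line distance between the two points
euclidean = math.dist(vec1, vec2)

# Dot product: how much one vector projects onto the other
dot = sum(a * b for a, b in zip(vec1, vec2))

print("Euclidean distance:", euclidean)
print("Dot product:", dot)
```

Note that for unit-length (normalized) vectors, cosine similarity and the dot product give the same ranking.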

## Embedding Interface in LangChain

LangChain provides a standard interface for text embedding models (e.g., OpenAI, Cohere, Hugging Face) via the [Embeddings](https://python.langchain.com/api_reference/core/embeddings/langchain_core.embeddings.embeddings.Embeddings.html#langchain_core.embeddings.embeddings.Embeddings) interface.

Two main methods are available:

* `embed_documents(texts: List[str]) → List[List[float]]`: Embeds a list of documents.
* `embed_query(text: str) → List[float]`: Embeds a single query.

<Note>
The interface allows queries and documents to be embedded with different strategies, though most providers handle them the same way in practice.
</Note>
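To illustrate the shape of this interface, here is a toy embedder (a hypothetical class, not part of LangChain) that implements both methods with a deterministic hash-based vector:

```python
import hashlib
from typing import List

class ToyEmbeddings:
    """Hypothetical embedder following the shape of the Embeddings interface."""

    def __init__(self, size: int = 8):
        self.size = size

    def embed_query(self, text: str) -> List[float]:
        # Derive a deterministic pseudo-embedding from a hash of the text
        digest = hashlib.sha256(text.encode()).digest()
        return [b / 255 for b in digest[: self.size]]

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        # Embed each document with the same strategy as queries
        return [self.embed_query(t) for t in texts]

embedder = ToyEmbeddings()
vectors = embedder.embed_documents(["hello", "world"])
print(len(vectors), len(vectors[0]))  # 2 8
```

Real implementations call a model or API instead of hashing, but they return the same types: one vector per query, a list of vectors per list of documents.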

## Top integrations

<Tabs>
<Tab title="OpenAI">

<CodeGroup>
```bash pip
pip install -qU langchain-openai
```

```bash uv
uv add langchain-openai
```
</CodeGroup>
```python
import getpass
import os

if not os.environ.get("OPENAI_API_KEY"):
  os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter API key for OpenAI: ")

from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-large")
```
</Tab>
<Tab title="Azure">
```bash
pip install -qU "langchain[azure]"
```
```python
import getpass
import os

if not os.environ.get("AZURE_OPENAI_API_KEY"):
  os.environ["AZURE_OPENAI_API_KEY"] = getpass.getpass("Enter API key for Azure: ")

from langchain_openai import AzureOpenAIEmbeddings

embeddings = AzureOpenAIEmbeddings(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
    openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],
)
```
</Tab>
<Tab title="Google Gemini">
```bash
pip install -qU langchain-google-genai
```
```python
import getpass
import os

if not os.environ.get("GOOGLE_API_KEY"):
  os.environ["GOOGLE_API_KEY"] = getpass.getpass("Enter API key for Google Gemini: ")

from langchain_google_genai import GoogleGenerativeAIEmbeddings

embeddings = GoogleGenerativeAIEmbeddings(model="models/gemini-embedding-001")
```
</Tab>
<Tab title="Google Vertex">
```bash
pip install -qU langchain-google-vertexai
```
```python
from langchain_google_vertexai import VertexAIEmbeddings

embeddings = VertexAIEmbeddings(model="text-embedding-005")
```
</Tab>
<Tab title="AWS">
```bash
pip install -qU langchain-aws
```
```python
from langchain_aws import BedrockEmbeddings

embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v2:0")
```
</Tab>
<Tab title="HuggingFace">
```bash
pip install -qU langchain-huggingface
```
```python
from langchain_huggingface import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
```
</Tab>
<Tab title="Ollama">
```bash
pip install -qU langchain-ollama
```
```python
from langchain_ollama import OllamaEmbeddings

embeddings = OllamaEmbeddings(model="llama3")
```
</Tab>
<Tab title="Cohere">
```bash
pip install -qU langchain-cohere
```
```python
import getpass
import os

if not os.environ.get("COHERE_API_KEY"):
  os.environ["COHERE_API_KEY"] = getpass.getpass("Enter API key for Cohere: ")

from langchain_cohere import CohereEmbeddings

embeddings = CohereEmbeddings(model="embed-english-v3.0")
```
</Tab>
<Tab title="Mistral AI">
```bash
pip install -qU langchain-mistralai
```
```python
import getpass
import os

if not os.environ.get("MISTRALAI_API_KEY"):
  os.environ["MISTRALAI_API_KEY"] = getpass.getpass("Enter API key for MistralAI: ")

from langchain_mistralai import MistralAIEmbeddings

embeddings = MistralAIEmbeddings(model="mistral-embed")
```
</Tab>
<Tab title="Nomic">
```bash
pip install -qU langchain-nomic
```
```python
import getpass
import os

if not os.environ.get("NOMIC_API_KEY"):
  os.environ["NOMIC_API_KEY"] = getpass.getpass("Enter API key for Nomic: ")

from langchain_nomic import NomicEmbeddings

embeddings = NomicEmbeddings(model="nomic-embed-text-v1.5")
```
</Tab>
<Tab title="NVIDIA">
```bash
pip install -qU langchain-nvidia-ai-endpoints
```
```python
import getpass
import os

if not os.environ.get("NVIDIA_API_KEY"):
  os.environ["NVIDIA_API_KEY"] = getpass.getpass("Enter API key for NVIDIA: ")

from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings

embeddings = NVIDIAEmbeddings(model="NV-Embed-QA")
```
</Tab>
<Tab title="Voyage AI">
```bash
pip install -qU langchain-voyageai
```
```python
import getpass
import os

if not os.environ.get("VOYAGE_API_KEY"):
  os.environ["VOYAGE_API_KEY"] = getpass.getpass("Enter API key for Voyage AI: ")

from langchain_voyageai import VoyageAIEmbeddings

embeddings = VoyageAIEmbeddings(model="voyage-3")
```
</Tab>
<Tab title="IBM watsonx">
```bash
pip install -qU langchain-ibm
```
```python
import getpass
import os

if not os.environ.get("WATSONX_APIKEY"):
  os.environ["WATSONX_APIKEY"] = getpass.getpass("Enter API key for IBM watsonx: ")

from langchain_ibm import WatsonxEmbeddings

embeddings = WatsonxEmbeddings(
    model_id="ibm/slate-125m-english-rtrvr",
    url="https://us-south.ml.cloud.ibm.com",
    project_id="<WATSONX PROJECT_ID>",
)
```
</Tab>
<Tab title="Fake">
```bash
pip install -qU langchain-core
```
```python
from langchain_core.embeddings import DeterministicFakeEmbedding

embeddings = DeterministicFakeEmbedding(size=4096)
```
</Tab>
</Tabs>

Once instantiated, any of the models above can embed text:

```python
embeddings.embed_query("Hello, world!")
```

| Provider                                                               | Package                                                                                                                                                          |
|------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [AzureOpenAI](/oss/integrations/text_embedding/azureopenai)            | [langchain-openai](https://python.langchain.com/api_reference/openai/embeddings/langchain_openai.embeddings.azure.AzureOpenAIEmbeddings.html)                    |
| [Ollama](/oss/integrations/text_embedding/ollama)                      | [langchain-ollama](https://python.langchain.com/api_reference/ollama/embeddings/langchain_ollama.embeddings.OllamaEmbeddings.html)                               |
| [Fake](/oss/integrations/text_embedding/fake)                          | [langchain-core](https://python.langchain.com/api_reference/core/embeddings/langchain_core.embeddings.fake.FakeEmbeddings.html)                                  |
| [OpenAI](/oss/integrations/text_embedding/openai)                      | [langchain-openai](https://python.langchain.com/api_reference/openai/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html)                          |
| [Google Gemini](/oss/integrations/text_embedding/google_generative_ai) | [langchain-google-genai](https://python.langchain.com/api_reference/google_genai/embeddings/langchain_google_genai.embeddings.GoogleGenerativeAIEmbeddings.html) |
| [Together](/oss/integrations/text_embedding/together)                  | [langchain-together](https://python.langchain.com/api_reference/together/embeddings/langchain_together.embeddings.TogetherEmbeddings.html)                       |
| [Fireworks](/oss/integrations/text_embedding/fireworks)                | [langchain-fireworks](https://python.langchain.com/api_reference/fireworks/embeddings/langchain_fireworks.embeddings.FireworksEmbeddings.html)                   |
| [MistralAI](/oss/integrations/text_embedding/mistralai)                | [langchain-mistralai](https://python.langchain.com/api_reference/mistralai/embeddings/langchain_mistralai.embeddings.MistralAIEmbeddings.html)                   |
| [Cohere](/oss/integrations/text_embedding/cohere)                      | [langchain-cohere](https://python.langchain.com/api_reference/cohere/embeddings/langchain_cohere.embeddings.CohereEmbeddings.html)                               |
| [Nomic](/oss/integrations/text_embedding/nomic)                        | [langchain-nomic](https://python.langchain.com/api_reference/nomic/embeddings/langchain_nomic.embeddings.NomicEmbeddings.html)                                   |
| [Databricks](/oss/integrations/text_embedding/databricks)              | [databricks-langchain](https://api-docs.databricks.com/python/databricks-ai-bridge/latest/databricks_langchain.html#databricks_langchain.DatabricksEmbeddings)   |
| [IBM](/oss/integrations/text_embedding/ibm_watsonx)                    | [langchain-ibm](https://python.langchain.com/api_reference/ibm/embeddings/langchain_ibm.embeddings.WatsonxEmbeddings.html)                                       |
| [NVIDIA](/oss/integrations/text_embedding/nvidia_ai_endpoints)         | [langchain-nvidia](https://python.langchain.com/api_reference/nvidia_ai_endpoints/embeddings/langchain_nvidia_ai_endpoints.embeddings.NVIDIAEmbeddings.html)     |

## Caching

Embeddings can be stored or temporarily cached to avoid recomputing them.

Caching is done with `CacheBackedEmbeddings`, a wrapper that stores embeddings in a key-value store: each text is hashed, and the hash is used as the cache key.

The main supported way to initialize a `CacheBackedEmbeddings` is `from_bytes_store`. It takes the following parameters:

- **underlying_embedder**: The embedder to use for embedding.
- **document_embedding_cache**: Any [`ByteStore`](/oss/integrations/stores/) for caching document embeddings.
- **batch_size**: (optional, defaults to `None`) The number of documents to embed between store updates.
- **namespace**: (optional, defaults to `""`) The namespace to use for the document cache. Helps avoid collisions (e.g., set it to the embedding model name).
- **query_embedding_cache**: (optional, defaults to `None`) A [`ByteStore`](/oss/integrations/stores/) for caching query embeddings, or `True` to reuse the same store as `document_embedding_cache`.

<Important>
- Always set the `namespace` parameter to avoid collisions when using different embedding models.
- `CacheBackedEmbeddings` does not cache query embeddings by default. To enable this, specify a `query_embedding_cache`.
</Important>

```python
import time

from langchain.embeddings import CacheBackedEmbeddings  # [!code highlight]
from langchain.storage import LocalFileStore  # [!code highlight]

# Create your underlying embeddings model
underlying_embeddings = ...  # e.g., OpenAIEmbeddings(), HuggingFaceEmbeddings(), etc.

# LocalFileStore persists embeddings to the local filesystem.
# This isn't for production use, but is useful for local development.
store = LocalFileStore("./cache/")  # [!code highlight]

cached_embedder = CacheBackedEmbeddings.from_bytes_store(
    underlying_embeddings,
    store,
    namespace=underlying_embeddings.model,  # avoid collisions between models
    query_embedding_cache=True,  # also cache query embeddings (off by default)
)

# First call computes the embedding and writes it to the cache
tic = time.time()
print(cached_embedder.embed_query("Hello, world!"))
print(f"First call took: {time.time() - tic:.2f} seconds")

# Subsequent calls read from the cache
tic = time.time()
print(cached_embedder.embed_query("Hello, world!"))
print(f"Second call took: {time.time() - tic:.2f} seconds")
```

In production, you would typically use a more robust persistent store, such as a database or cloud storage. See [stores integrations](/oss/integrations/stores/) for options.

## All integrations

<Columns cols={3}>
<Card title="Aleph Alpha" icon="link" href="/oss/integrations/text_embedding/aleph_alpha" arrow="true" cta="View guide"></Card>
<Card title="Anyscale" icon="link" href="/oss/integrations/text_embedding/anyscale" arrow="true" cta="View guide"></Card>
<Card title="Ascend" icon="link" href="/oss/integrations/text_embedding/ascend" arrow="true" cta="View guide"></Card>
<Card title="AwaDB" icon="link" href="/oss/integrations/text_embedding/awadb" arrow="true" cta="View guide"></Card>
<Card title="AzureOpenAI" icon="link" href="/oss/integrations/text_embedding/azureopenai" arrow="true" cta="View guide"></Card>
<Card title="Baichuan Text Embeddings" icon="link" href="/oss/integrations/text_embedding/baichuan" arrow="true" cta="View guide"></Card>
<Card title="Baidu Qianfan" icon="link" href="/oss/integrations/text_embedding/baidu_qianfan_endpoint" arrow="true" cta="View guide"></Card>
<Card title="Bedrock" icon="link" href="/oss/integrations/text_embedding/bedrock" arrow="true" cta="View guide"></Card>
<Card title="BGE on Hugging Face" icon="link" href="/oss/integrations/text_embedding/bge_huggingface" arrow="true" cta="View guide"></Card>
<Card title="Bookend AI" icon="link" href="/oss/integrations/text_embedding/bookend" arrow="true" cta="View guide"></Card>
<Card title="Clarifai" icon="link" href="/oss/integrations/text_embedding/clarifai" arrow="true" cta="View guide"></Card>
<Card title="Cloudflare Workers AI" icon="link" href="/oss/integrations/text_embedding/cloudflare_workersai" arrow="true" cta="View guide"></Card>
<Card title="Clova Embeddings" icon="link" href="/oss/integrations/text_embedding/clova" arrow="true" cta="View guide"></Card>
<Card title="Cohere" icon="link" href="/oss/integrations/text_embedding/cohere" arrow="true" cta="View guide"></Card>
<Card title="DashScope" icon="link" href="/oss/integrations/text_embedding/dashscope" arrow="true" cta="View guide"></Card>
<Card title="Databricks" icon="link" href="/oss/integrations/text_embedding/databricks" arrow="true" cta="View guide"></Card>
<Card title="DeepInfra" icon="link" href="/oss/integrations/text_embedding/deepinfra" arrow="true" cta="View guide"></Card>
<Card title="EDEN AI" icon="link" href="/oss/integrations/text_embedding/edenai" arrow="true" cta="View guide"></Card>
<Card title="Elasticsearch" icon="link" href="/oss/integrations/text_embedding/elasticsearch" arrow="true" cta="View guide"></Card>
<Card title="Embaas" icon="link" href="/oss/integrations/text_embedding/embaas" arrow="true" cta="View guide"></Card>
<Card title="Fake Embeddings" icon="link" href="/oss/integrations/text_embedding/fake" arrow="true" cta="View guide"></Card>
<Card title="FastEmbed by Qdrant" icon="link" href="/oss/integrations/text_embedding/fastembed" arrow="true" cta="View guide"></Card>
<Card title="Fireworks" icon="link" href="/oss/integrations/text_embedding/fireworks" arrow="true" cta="View guide"></Card>
<Card title="Google Gemini" icon="link" href="/oss/integrations/text_embedding/google_generative_ai" arrow="true" cta="View guide"></Card>
<Card title="Google Vertex AI" icon="link" href="/oss/integrations/text_embedding/google_vertex_ai_palm" arrow="true" cta="View guide"></Card>
<Card title="GPT4All" icon="link" href="/oss/integrations/text_embedding/gpt4all" arrow="true" cta="View guide"></Card>
<Card title="Gradient" icon="link" href="/oss/integrations/text_embedding/gradient" arrow="true" cta="View guide"></Card>
<Card title="GreenNode" icon="link" href="/oss/integrations/text_embedding/greennode" arrow="true" cta="View guide"></Card>
<Card title="Hugging Face" icon="link" href="/oss/integrations/text_embedding/huggingfacehub" arrow="true" cta="View guide"></Card>
<Card title="IBM watsonx.ai" icon="link" href="/oss/integrations/text_embedding/ibm_watsonx" arrow="true" cta="View guide"></Card>
<Card title="Infinity" icon="link" href="/oss/integrations/text_embedding/infinity" arrow="true" cta="View guide"></Card>
<Card title="Instruct Embeddings" icon="link" href="/oss/integrations/text_embedding/instruct_embeddings" arrow="true" cta="View guide"></Card>
<Card title="IPEX-LLM CPU" icon="link" href="/oss/integrations/text_embedding/ipex_llm" arrow="true" cta="View guide"></Card>
<Card title="IPEX-LLM GPU" icon="link" href="/oss/integrations/text_embedding/ipex_llm_gpu" arrow="true" cta="View guide"></Card>
<Card title="Intel Extension for Transformers" icon="link" href="/oss/integrations/text_embedding/itrex" arrow="true" cta="View guide"></Card>
<Card title="Jina" icon="link" href="/oss/integrations/text_embedding/jina" arrow="true" cta="View guide"></Card>
<Card title="John Snow Labs" icon="link" href="/oss/integrations/text_embedding/johnsnowlabs_embedding" arrow="true" cta="View guide"></Card>
<Card title="LASER" icon="link" href="/oss/integrations/text_embedding/laser" arrow="true" cta="View guide"></Card>
<Card title="Lindorm" icon="link" href="/oss/integrations/text_embedding/lindorm" arrow="true" cta="View guide"></Card>
<Card title="Llama.cpp" icon="link" href="/oss/integrations/text_embedding/llamacpp" arrow="true" cta="View guide"></Card>
<Card title="Llamafile" icon="link" href="/oss/integrations/text_embedding/llamafile" arrow="true" cta="View guide"></Card>
<Card title="LLMRails" icon="link" href="/oss/integrations/text_embedding/llm_rails" arrow="true" cta="View guide"></Card>
<Card title="LocalAI" icon="link" href="/oss/integrations/text_embedding/localai" arrow="true" cta="View guide"></Card>
<Card title="MiniMax" icon="link" href="/oss/integrations/text_embedding/minimax" arrow="true" cta="View guide"></Card>
<Card title="MistralAI" icon="link" href="/oss/integrations/text_embedding/mistralai" arrow="true" cta="View guide"></Card>
<Card title="Model2Vec" icon="link" href="/oss/integrations/text_embedding/model2vec" arrow="true" cta="View guide"></Card>
<Card title="ModelScope" icon="link" href="/oss/integrations/text_embedding/modelscope_embedding" arrow="true" cta="View guide"></Card>
<Card title="MosaicML" icon="link" href="/oss/integrations/text_embedding/mosaicml" arrow="true" cta="View guide"></Card>
<Card title="Naver" icon="link" href="/oss/integrations/text_embedding/naver" arrow="true" cta="View guide"></Card>
<Card title="Nebius" icon="link" href="/oss/integrations/text_embedding/nebius" arrow="true" cta="View guide"></Card>
<Card title="Netmind" icon="link" href="/oss/integrations/text_embedding/netmind" arrow="true" cta="View guide"></Card>
<Card title="NLP Cloud" icon="link" href="/oss/integrations/text_embedding/nlp_cloud" arrow="true" cta="View guide"></Card>
<Card title="Nomic" icon="link" href="/oss/integrations/text_embedding/nomic" arrow="true" cta="View guide"></Card>
<Card title="NVIDIA NIMs" icon="link" href="/oss/integrations/text_embedding/nvidia_ai_endpoints" arrow="true" cta="View guide"></Card>
<Card title="Oracle Cloud Infrastructure" icon="link" href="/oss/integrations/text_embedding/oci_generative_ai" arrow="true" cta="View guide"></Card>
<Card title="Ollama" icon="link" href="/oss/integrations/text_embedding/ollama" arrow="true" cta="View guide"></Card>
<Card title="OpenClip" icon="link" href="/oss/integrations/text_embedding/open_clip" arrow="true" cta="View guide"></Card>
<Card title="OpenAI" icon="link" href="/oss/integrations/text_embedding/openai" arrow="true" cta="View guide"></Card>
<Card title="OpenVINO" icon="link" href="/oss/integrations/text_embedding/openvino" arrow="true" cta="View guide"></Card>
<Card title="Optimum Intel" icon="link" href="/oss/integrations/text_embedding/optimum_intel" arrow="true" cta="View guide"></Card>
<Card title="Oracle AI Vector Search" icon="link" href="/oss/integrations/text_embedding/oracleai" arrow="true" cta="View guide"></Card>
<Card title="OVHcloud" icon="link" href="/oss/integrations/text_embedding/ovhcloud" arrow="true" cta="View guide"></Card>
<Card title="Pinecone Embeddings" icon="link" href="/oss/integrations/text_embedding/pinecone" arrow="true" cta="View guide"></Card>
<Card title="PredictionGuard" icon="link" href="/oss/integrations/text_embedding/predictionguard" arrow="true" cta="View guide"></Card>
<Card title="PremAI" icon="link" href="/oss/integrations/text_embedding/premai" arrow="true" cta="View guide"></Card>
<Card title="SageMaker" icon="link" href="/oss/integrations/text_embedding/sagemaker-endpoint" arrow="true" cta="View guide"></Card>
<Card title="SambaNovaCloud" icon="link" href="/oss/integrations/text_embedding/sambanova" arrow="true" cta="View guide"></Card>
<Card title="SambaStudio" icon="link" href="/oss/integrations/text_embedding/sambastudio" arrow="true" cta="View guide"></Card>
<Card title="Self Hosted" icon="link" href="/oss/integrations/text_embedding/self-hosted" arrow="true" cta="View guide"></Card>
<Card title="Sentence Transformers" icon="link" href="/oss/integrations/text_embedding/sentence_transformers" arrow="true" cta="View guide"></Card>
<Card title="Solar" icon="link" href="/oss/integrations/text_embedding/solar" arrow="true" cta="View guide"></Card>
<Card title="SpaCy" icon="link" href="/oss/integrations/text_embedding/spacy_embedding" arrow="true" cta="View guide"></Card>
<Card title="SparkLLM" icon="link" href="/oss/integrations/text_embedding/sparkllm" arrow="true" cta="View guide"></Card>
<Card title="TensorFlow Hub" icon="link" href="/oss/integrations/text_embedding/tensorflowhub" arrow="true" cta="View guide"></Card>
<Card title="Text Embeddings Inference" icon="link" href="/oss/integrations/text_embedding/text_embeddings_inference" arrow="true" cta="View guide"></Card>
<Card title="TextEmbed" icon="link" href="/oss/integrations/text_embedding/textembed" arrow="true" cta="View guide"></Card>
<Card title="Titan Takeoff" icon="link" href="/oss/integrations/text_embedding/titan_takeoff" arrow="true" cta="View guide"></Card>
<Card title="Together AI" icon="link" href="/oss/integrations/text_embedding/together" arrow="true" cta="View guide"></Card>
<Card title="Upstage" icon="link" href="/oss/integrations/text_embedding/upstage" arrow="true" cta="View guide"></Card>
<Card title="Volc Engine" icon="link" href="/oss/integrations/text_embedding/volcengine" arrow="true" cta="View guide"></Card>
<Card title="Voyage AI" icon="link" href="/oss/integrations/text_embedding/voyageai" arrow="true" cta="View guide"></Card>
<Card title="Xinference" icon="link" href="/oss/integrations/text_embedding/xinference" arrow="true" cta="View guide"></Card>
<Card title="YandexGPT" icon="link" href="/oss/integrations/text_embedding/yandex" arrow="true" cta="View guide"></Card>
<Card title="ZhipuAI" icon="link" href="/oss/integrations/text_embedding/zhipuai" arrow="true" cta="View guide"></Card>
</Columns>
