---
title: "Ollama Embedder"
description: "Ollama supports the running of both LLMs and embedding models."
---

import { Callout } from "nextra/components";
import Image from "next/image";

<Image
  src="/images/anythingllm-setup/embedder-configuration/local/ollama/header-image.png"
  height={1080}
  width={1920}
  quality={100}
  alt="Ollama Embedder"
/>

# Ollama Embedder

<Callout type="error" emoji="‼️">
  **Heads up!**

  Ollama's `/models` endpoint lists both LLMs **and** embedding models in the dropdown selection. **Please** ensure you select an embedding model for embedding.

  **llama2**, for example, is an LLM, not an embedder.
</Callout>
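Because the dropdown mixes both model types, a quick naming check can help you avoid picking an LLM by mistake. The sketch below is a heuristic only, assuming embedding models follow common naming conventions such as `nomic-embed-text` or `mxbai-embed-large` (example names, not a definitive list); always confirm on the model's page in the Ollama library.

```python
# Heuristic sketch: flag model names that look like embedding models.
# Assumes conventional names (e.g. "nomic-embed-text", "mxbai-embed-large");
# this is NOT an official check -- verify against the Ollama library.
def looks_like_embedder(model_name: str) -> bool:
    name = model_name.lower()
    return any(hint in name for hint in ("embed", "bge", "minilm"))

models = ["llama2", "nomic-embed-text", "mxbai-embed-large", "mistral"]
embedders = [m for m in models if looks_like_embedder(m)]
print(embedders)  # ['nomic-embed-text', 'mxbai-embed-large']
```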

## Connecting to Ollama

When running Ollama locally with its default settings, connect to it at `http://127.0.0.1:11434`.

[Ollama](https://ollama.com) supports the running of both LLMs **and** embedding models.

Download the embedding model you wish to use, then select it during onboarding or in **Settings** so your uploaded documents are embedded via Ollama.
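As a rough sketch of what happens under the hood, an embedding call to a locally running Ollama instance targets its `/api/embeddings` endpoint with a model name and a prompt. The helper below only builds the request; the model name `nomic-embed-text` is an illustrative assumption, and the actual POST (commented out) requires a running Ollama server.

```python
import json
import urllib.request

OLLAMA_URL = "http://127.0.0.1:11434"  # Ollama's default address

def build_embedding_request(text: str, model: str = "nomic-embed-text"):
    """Return the URL and JSON body for an Ollama embedding call.
    `nomic-embed-text` is just an example; use whichever embedding
    model you pulled with `ollama pull`."""
    url = f"{OLLAMA_URL}/api/embeddings"
    body = {"model": model, "prompt": text}
    return url, body

url, body = build_embedding_request("Hello from AnythingLLM!")
# To actually send it (requires a running Ollama server):
# req = urllib.request.Request(url, data=json.dumps(body).encode(),
#                              headers={"Content-Type": "application/json"})
# embedding = json.load(urllib.request.urlopen(req))["embedding"]
print(url)  # http://127.0.0.1:11434/api/embeddings
```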

You can switch to a different model at any time in **Settings**.

<Image
  src="/images/anythingllm-setup/embedder-configuration/local/ollama/ollama-embedder.png"
  height={1080}
  width={1920}
  quality={100}
  alt="Ollama Embedder"
/>
