---
title: "Ollama Setup and Troubleshooting Guide"
description: "Quick solutions for setting up Ollama with AnythingLLM"
---

import { Callout, Tabs } from "nextra/components";
import Image from "next/image";

<Image
  src="/images/faq/ollama-models-not-loading/header-image.png"
  height={1080}
  width={1920}
  quality={100}
  alt="AnythingLLM and Ollama Setup Guide"
/>

# Ollama Connection Troubleshooting

## Ensure Ollama is Running

Before attempting any fixes or URL changes, verify that Ollama is running properly on your device:

1. Open your web browser and navigate to `http://127.0.0.1:11434`
2. You should see a page similar to this:

<Image
  src="/images/faq/ollama-models-not-loading/ollama-running.png"
  height={1080}
  width={1920}
  quality={100}
  alt="Ollama running in background"
/>

If you don't see this page, troubleshoot your Ollama installation and ensure that it is running properly before moving forward.
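
You can also run the same check from a terminal. A healthy Ollama install answers a plain HTTP request on its default port (this sketch assumes the default `127.0.0.1:11434` binding):

```shell
# Query the Ollama root endpoint; a running server replies "Ollama is running"
curl -s http://127.0.0.1:11434
```

If the command hangs or prints a connection error, Ollama is not reachable at that address.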

## Automatic URL Detection (LLM & Embedding Providers)

<Callout type="info" emoji="ℹ️">
  AnythingLLM features automatic URL detection for Ollama. Manual configuration
  is only necessary if auto-detection fails.
</Callout>

### URL Successfully Detected

When selecting the Ollama provider, AnythingLLM attempts to auto-detect your Ollama URL. If the option to input the base URL is hidden, the URL was detected automatically.

<Image
  src="/images/faq/ollama-models-not-loading/ollama-detected-collapsed.png"
  height={1080}
  width={1920}
  quality={100}
  style={{
    borderRadius: "20px",
    marginTop: "20px",
    objectFit: "none",
    objectPosition: "-320px top",
    width: "100%",
    height: "600px",
  }}
  alt="Ollama URL automatically detected"
/>

### URL Detection Failed

If the manual endpoint input is expanded, the URL could not be detected.

<Image
  src="/images/faq/ollama-models-not-loading/ollama-cannot-detect.png"
  height={1080}
  width={1920}
  quality={100}
  style={{
    borderRadius: "20px",
    marginTop: "20px",
    objectFit: "none",
    objectPosition: "-320px top",
    width: "100%",
    height: "650px",
  }}
  alt="Ollama URL failed detection"
/>

If Ollama was not running when AnythingLLM tried to detect the URL, start Ollama and then press the `Auto-Detect` button. This should detect the URL automatically and let you select the `Model` and `Max Tokens` values.

## Setting the Correct Ollama URL

<Callout type="error" emoji="🚨">
  If AnythingLLM was unable to detect your URL automatically, this is most
  likely an issue with your Ollama setup/configuration NOT AnythingLLM.
</Callout>
If you have confirmed that your Ollama installation is running properly and is not blocked by a firewall or other network restriction, you can set the URL manually.

Choose your AnythingLLM version to find the correct Ollama URL:

<Tabs items={['Desktop', 'Docker', 'Self-hosted']}>
  <Tabs.Tab>
    ### Desktop Version

    Use: `http://127.0.0.1:11434`

    <Image
      src="/images/faq/ollama-models-not-loading/ollama-correct-url.png"
      height={1080}
      width={1920}
      quality={100}
      alt="Correct Ollama Base URL for AnythingLLM Desktop Version"
    />

  </Tabs.Tab>

  <Tabs.Tab>
    ### Docker Version

    - Windows/macOS: `http://host.docker.internal:11434`
    - Linux: `http://172.17.0.1:11434`
    <Callout type="warning" emoji="⚠️">
      On Linux, use `http://172.17.0.1:11434` as `host.docker.internal` doesn't work.
    </Callout>
    <Image
      src="/images/faq/ollama-models-not-loading/ollama-correct-url-docker.png"
      height={1080}
      width={1920}
      quality={100}
      style={{
        borderRadius: "20px",
        marginTop: "20px",
        objectFit: "none",
        objectPosition: "-320px top",
        width: "100%",
        height: "650px",
      }}
      alt="Correct Ollama Base URL for AnythingLLM Docker Version"
    />

  </Tabs.Tab>

  <Tabs.Tab>
    ### Self-hosted Version

    Use either:
    - `http://localhost:11434`
    - `http://127.0.0.1:11434`

  </Tabs.Tab>
</Tabs>
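
If you are unsure which base URL applies to your setup, you can probe the common candidates from the machine (or container) running AnythingLLM. This is a minimal sketch; the candidate list mirrors the tabs above and may need adjusting for your environment:

```shell
# Try each common Ollama base URL and report any that answer
for url in http://127.0.0.1:11434 http://host.docker.internal:11434 http://172.17.0.1:11434; do
  if curl -s --max-time 2 "$url" | grep -q "Ollama is running"; then
    echo "Ollama reachable at: $url"
  fi
done
```

Whichever URL the loop reports is the one to enter as the Ollama Base URL.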

## AnythingLLM Desktop: Built-in vs. Standalone Ollama

AnythingLLM Desktop offers two Ollama options:

1. **Built-in AnythingLLM LLM Provider**:

   - Runs a separate Ollama instance internally.
   - Models downloaded to standalone Ollama won't appear here.

2. **Standalone Ollama**:
   - Run Ollama separately on your system.
   - Use the URL `http://127.0.0.1:11434`.

<Image
  src="/images/faq/ollama-models-not-loading/anythingllm-ollama-provider.png"
  height={1080}
  width={1920}
  quality={100}
  alt="AnythingLLM built-in Ollama provider"
/>

## Troubleshooting

If you're still experiencing issues:

1. Confirm you're using the correct URL for your setup.
2. Check for firewall or network issues blocking the connection.
3. Restart both Ollama and AnythingLLM.
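
A common cause for the Docker case: by default, Ollama listens only on `127.0.0.1`, which containers cannot reach. Restarting Ollama with its documented `OLLAMA_HOST` environment variable set exposes it on all interfaces (a sketch; stop any running Ollama instance first):

```shell
# Ollama binds to 127.0.0.1 by default; to let a Docker container reach it,
# restart it listening on all interfaces via the OLLAMA_HOST variable
OLLAMA_HOST=0.0.0.0 ollama serve
```

Note that binding to all interfaces may expose Ollama on your local network, so only do this behind a firewall you trust.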

<Callout type="info" emoji="💡">
  If problems persist after trying these steps, please visit our Discord to ask
  your questions.
</Callout>
