---
title: "Ollama LLM"
description: "Ollama is a popular open-source command-line tool and engine that allows you to download quantized versions of the most popular LLM chat models"
---

import Image from "next/image";

<Image
  src="/images/anythingllm-setup/llm-configuration/local/ollama/header-image.png"
  height={1080}
  width={1920}
  quality={100}
  alt="Ollama LLM"
/>

# Ollama LLM

[Ollama](https://ollama.com) is a popular [open-source](https://github.com/ollama/ollama) command-line tool and engine that allows you to download quantized versions of the most popular LLM chat models.

Ollama is a _separate_ application that you must download and run before you can connect to it. Ollama supports running LLMs on both CPU and GPU.
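For example, once Ollama is installed, you can download a model and start the local server from the command line. The model name `llama3` below is just an illustration; any model from the Ollama library works the same way:

```shell
# Download a quantized chat model (llama3 is used here as an example)
ollama pull llama3

# Start the Ollama server, which listens on http://127.0.0.1:11434 by default
ollama serve
```

On desktop installs, the Ollama app usually starts the server automatically, so `ollama serve` is only needed if the server is not already running.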

## Connecting to Ollama

When running Ollama locally with its default settings, connect to it using the base URL `http://127.0.0.1:11434`.
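You can verify that Ollama is reachable at that address before configuring the connection. As a sketch, assuming the server is running locally on the default port:

```shell
# Quick connectivity check -- the root endpoint responds with "Ollama is running"
curl http://127.0.0.1:11434

# List the models available locally (these are the models you can select)
curl http://127.0.0.1:11434/api/tags
```

If either request fails, make sure the Ollama application or `ollama serve` is running, and that no firewall is blocking port `11434`.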

You can update your model to a different model at any time in the **Settings**.

<Image
  src="/images/anythingllm-setup/llm-configuration/local/ollama/ollama-llm.png"
  height={1080}
  width={1920}
  quality={100}
  alt="Ollama LLM settings"
/>
