---
title: Ollama
---

<Warning>
**You are currently on a page documenting the use of Ollama models as [text completion models](/oss/concepts/text_llms). Many popular Ollama models are [chat completion models](/oss/langchain/models).**

You may be looking for [this page instead](/oss/integrations/chat/ollama/).
</Warning>

This page goes over how to use LangChain to interact with `Ollama` models.

## Installation

```python
# install package
%pip install -U langchain-ollama
```

## Setup

First, follow [these instructions](https://github.com/ollama/ollama?tab=readme-ov-file#ollama) to set up and run a local Ollama instance:

* [Download](https://ollama.ai/download) and install Ollama on a supported platform (including Windows Subsystem for Linux (WSL), macOS, and Linux)
  * macOS users can install via Homebrew with `brew install ollama` and start with `brew services start ollama`
* Fetch a model via `ollama pull <name-of-model>`
  * View a list of available models via the [model library](https://ollama.ai/library)
  * e.g., `ollama pull llama3`
* This will download the default tagged version of the model. Typically, the default points to the latest, smallest-parameter variant of the model.

> On macOS, the models will be downloaded to `~/.ollama/models`
>
> On Linux (or WSL), the models will be stored at `/usr/share/ollama/.ollama/models`

* Specify an exact version of the model of interest, e.g. `ollama pull vicuna:13b-v1.5-16k-q4_0` (see the [available tags for the `Vicuna` model](https://ollama.ai/library/vicuna/tags))
* To view all pulled models, use `ollama list`
* To chat directly with a model from the command line, use `ollama run <name-of-model>`
* View the [Ollama documentation](https://github.com/ollama/ollama/tree/main/docs) for more commands. You can run `ollama help` in the terminal to see available commands.
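Before wiring Ollama into LangChain, you can verify the local server is reachable. The sketch below uses only the standard library and assumes Ollama's default address of `http://localhost:11434` (adjust the URL if you serve Ollama elsewhere):

```python
import urllib.error
import urllib.request


def ollama_is_running(base_url: str = "http://localhost:11434") -> bool:
    """Return True if an Ollama server responds at base_url."""
    try:
        # Ollama's root endpoint replies with a simple status message
        with urllib.request.urlopen(base_url, timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


print(ollama_is_running())
```

If this prints `False`, start the server (e.g., `ollama serve` or `brew services start ollama` on macOS) before running the examples below.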

## Usage

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_ollama.llms import OllamaLLM

template = """Question: {question}

Answer: Let's think step by step."""

prompt = ChatPromptTemplate.from_template(template)

model = OllamaLLM(model="llama3.1")

chain = prompt | model

chain.invoke({"question": "What is LangChain?"})
```

```output
'To break down what LangChain is, let\'s analyze it step by step:\n\n1. **Break down the name**: "Lang" likely stands for "Language", suggesting that LangChain has something to do with language processing or AI-related tasks involving human languages.\n\n2. **Understanding the term "chain" in this context**: In technology and computing, particularly in the realm of artificial intelligence (AI) and machine learning (ML), a "chain" often refers to a series of processes linked together. This can imply that LangChain involves executing multiple tasks or functions in sequence.\n\n3. **Connection to AI/ML technologies**: Given its name and context, it\'s reasonable to infer that LangChain is involved in the field of natural language processing (NLP) or more broadly, artificial intelligence. NLP is an area within computer science concerned with the interaction between computers and humans in a human language.\n\n4. **Possible functions or services**: Considering the focus on languages and the potential for multiple linked processes, LangChain might offer various AI-driven functionalities such as:\n    - Text analysis (like sentiment analysis or text classification).\n    - Language translation.\n    - Chatbots or conversational interfaces.\n    - Content generation (e.g., articles, summaries).\n    - Dialogue management systems.\n\n5. **Conclusion**: Based on the name and analysis of its components, LangChain is likely a tool or framework for developing applications that involve complex interactions with human languages through AI and ML technologies. It possibly enables creating custom chatbots, natural language interfaces, text generators, or other applications that require intricate language understanding and processing capabilities.\n\nThis step-by-step breakdown indicates that LangChain is focused on leveraging AI to understand, process, and interact with human languages in a sophisticated manner, likely through multiple linked processes (the "chain" part).'
```
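Under the hood, `OllamaLLM` issues completion-style requests against the local server's REST API (the `/api/generate` endpoint), sending a JSON body with the model name and the fully formatted prompt. A rough sketch of that payload's shape (real requests may carry additional options):

```python
import json

# Approximate shape of a non-streaming /api/generate request body.
payload = {
    "model": "llama3.1",
    "prompt": "Question: What is LangChain?\n\nAnswer: Let's think step by step.",
    "stream": False,  # True would yield newline-delimited JSON chunks
}

body = json.dumps(payload)
print(body)
```

Knowing this shape can help when debugging: you can replay the same request with `curl` against `http://localhost:11434/api/generate` to see whether an issue lies in the model or in the LangChain layer.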

## Multi-modal

Ollama has support for multi-modal LLMs, such as [bakllava](https://ollama.com/library/bakllava) and [llava](https://ollama.com/library/llava).

```shell
ollama pull bakllava
```

Be sure to update Ollama to the most recent version so that multi-modal models are supported.

```python
%pip install pillow
```

```python
import base64
from io import BytesIO

from IPython.display import HTML, display
from PIL import Image


def convert_to_base64(pil_image):
    """
    Convert PIL images to Base64 encoded strings

    :param pil_image: PIL image
    :return: Base64 encoded string
    """

    buffered = BytesIO()
    pil_image.save(buffered, format="JPEG")  # You can change the format if needed
    img_str = base64.b64encode(buffered.getvalue()).decode("utf-8")
    return img_str


def plt_img_base64(img_base64):
    """
    Display base64 encoded string as image

    :param img_base64:  Base64 string
    """
    # Create an HTML img tag with the base64 string as the source
    image_html = f'<img src="data:image/jpeg;base64,{img_base64}" />'
    # Display the image by rendering the HTML
    display(HTML(image_html))


file_path = "../../../static/img/ollama_example_img.jpg"
pil_image = Image.open(file_path)
image_b64 = convert_to_base64(pil_image)
plt_img_base64(image_b64)
```
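If you don't need PIL for resizing or format conversion, the image file's raw bytes can be base64-encoded directly with the standard library. A minimal alternative sketch (it assumes the file is already in a format the model accepts, such as JPEG or PNG):

```python
import base64
from pathlib import Path


def file_to_base64(path: str) -> str:
    """Base64-encode a file's raw bytes as a UTF-8 string."""
    return base64.b64encode(Path(path).read_bytes()).decode("utf-8")
```

The resulting string can be passed to `llm.bind(images=[...])` exactly like `image_b64` below.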


```python
from langchain_ollama import OllamaLLM

llm = OllamaLLM(model="bakllava")

llm_with_image_context = llm.bind(images=[image_b64])
llm_with_image_context.invoke("What is the dollar based gross retention rate:")
```

```output
'90%'
```

## API reference

For detailed documentation of all `OllamaLLM` features and configurations head to the [API reference](https://python.langchain.com/api_reference/ollama/llms/langchain_ollama.llms.OllamaLLM.html).
