# LLM - Large Language Model

In this app, LLM is used for several purposes:
1. Extracting knowledge from docs;
2. Generating responses to user queries.

## Configure LLM

After logging in with an admin account, you can configure the LLM in the admin panel.

1. Click on the `Models > LLMs` tab;
2. Click on the `New LLM` button to add a new LLM;

    ![llm-config](https://github.com/user-attachments/assets/993eec34-a99a-4acf-b4b7-a4ee8e28e3d5 "LLM Config")

3. Enter your LLM information and click the `Create LLM` button;
4. Done!

import { Callout } from 'nextra/components'

<Callout>
If you want the new LLM to be used when answering user queries, you need to switch to the `Chat Engines` tab and set it as the chat engine's LLM.
</Callout>

## Supported LLM providers

Currently, Autoflow supports the following LLM providers:

- [OpenAI](https://platform.openai.com/)
- [Google Gemini](https://gemini.google.com/)
- [Anthropic Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/use-claude)
- [Amazon Bedrock](https://aws.amazon.com/bedrock/)
- And all OpenAI-compatible providers, including:
    - [OpenRouter](https://openrouter.ai/)
        - Default config:
        ```json
        {
            "api_base": "https://openrouter.ai/api/v1/"
        }
        ```
    - [BigModel](https://open.bigmodel.cn/)
        - Default config:
        ```json
        {
            "api_base": "https://open.bigmodel.cn/api/paas/v4/",
            "is_chat_model": true
        }
        ```
    - [Ollama](https://ollama.com/)
        - Default config:
        ```json
        {
            "api_base": "http://localhost:11434"
        }
        ```
    - [vLLM](https://docs.vllm.ai/en/stable/)
        - Default config:
        ```json
        {
            "api_base": "http://localhost:8000/v1/"
        }
        ```
    - [Xinference](https://inference.readthedocs.io/en/latest/index.html)
        - Default config:
        ```json
        {
            "api_base": "http://localhost:9997/v1/"
        }
        ```
