---
title: "AnythingLLM Default Transcription Model"
description: "AnythingLLM ships with a built-in LLM engine and provider that enables you to download popular and highly-rated LLMs like LLama-3, Phi-3 and more that can run locally on your CPU and GPU."
---

import { Callout } from "nextra/components";
import Image from "next/image";

<Image
  src="/images/anythingllm-setup/transcription-model-configuration/local/built-in/header-image.png"
  height={1080}
  width={1920}
  quality={100}
  alt="AnythingLLM Default Transcription Model"
/>

# AnythingLLM Default Transcription Model

<Callout type="info" emoji="️💡">
  **Note:**

    Using the local whisper model on machines with limited RAM or CPU can stall AnythingLLM when processing media files.
    We recommend at least 2GB of RAM and upload files less than 10MB.

</Callout>

AnythingLLM ships with a built-in transcription model, [Xenova Whisper](https://huggingface.co/Xenova/whisper-small), which downloads automatically on first use.

<Image
  src="/images/anythingllm-setup/transcription-model-configuration/local/built-in/default-transcription.png"
  height={1080}
  width={1920}
  quality={100}
  alt="AnythingLLM Default Transcription Model Settings"
/>
