---
title: Overview
---

Mem0 includes built-in support for a variety of popular large language models (LLMs). Memory operations can use the LLM you configure, tailoring behavior to your specific needs.

## Usage

To use an LLM, provide a configuration that customizes its behavior. If no configuration is supplied, a default configuration is applied and `OpenAI` is used as the LLM.

For a comprehensive list of available LLM configuration parameters, refer to [Config](./config).
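As a sketch, a configuration that selects a specific provider and model might look like the following. The model name and parameter values here are illustrative only; see [Config](./config) for the parameters your version supports:

```python
# Example Mem0 LLM configuration (provider, model, and values are illustrative).
config = {
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-4o-mini",
            "temperature": 0.1,
            "max_tokens": 2000,
        },
    }
}

# With mem0 installed and an API key set in your environment,
# the config is passed when creating the Memory instance:
#   from mem0 import Memory
#   m = Memory.from_config(config)
```

If the `llm` key is omitted entirely, Mem0 falls back to its default OpenAI configuration.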

To view all supported LLMs, see [Supported LLMs](./models).

<CardGroup cols={4}>
  <Card title="OpenAI" href="/components/llms/models/openai"></Card>
  <Card title="Ollama" href="/components/llms/models/ollama"></Card>
  <Card title="Azure OpenAI" href="/components/llms/models/azure_openai"></Card>
  <Card title="Anthropic" href="/components/llms/models/anthropic"></Card>
  <Card title="Together" href="/components/llms/models/together"></Card>
  <Card title="Groq" href="/components/llms/models/groq"></Card>
  <Card title="LiteLLM" href="/components/llms/models/litellm"></Card>
  <Card title="Mistral AI" href="/components/llms/models/mistral_ai"></Card>
  <Card title="Google AI" href="/components/llms/models/google_ai"></Card>
  <Card title="AWS Bedrock" href="/components/llms/models/aws_bedrock"></Card>
  <Card title="Gemini" href="/components/llms/models/gemini"></Card>
</CardGroup>

## Structured vs Unstructured Outputs

Mem0 supports two types of OpenAI LLM formats, each with its own strengths and use cases:

### Structured Outputs

Structured outputs use LLMs that support OpenAI's structured outputs mode:

- **Optimized for:** Returning structured responses (e.g., JSON objects)
- **Benefits:** Precise, easily parseable data
- **Ideal for:** Data extraction, form filling, API responses
- **Learn more:** [OpenAI Structured Outputs Guide](https://platform.openai.com/docs/guides/structured-outputs/introduction)

### Unstructured Outputs

Unstructured outputs use OpenAI's standard, free-form text generation:

- **Flexibility:** Returns open-ended, natural language responses
- **Customization:** Use the `response_format` parameter to guide output
- **Trade-off:** Less efficient than structured outputs for specific data needs
- **Best for:** Creative writing, explanations, general conversation

Choose the format that best suits your application's requirements for optimal performance and usability.
