---
title: "LLM Integration"
description: "Connect agents to OpenAI, Anthropic, Google, and more"
icon: "sparkles"
---

The MCPAgent works with any modern LLM provider through native SDK integration. Connect to OpenAI, Anthropic, Google, Groq, or any other provider that supports tool calling; no additional wrappers are required.

## Supported Providers

The agent framework includes native support for major LLM providers:

<Info>
**Native Integration**: Unlike other frameworks, mcp-use integrates directly with provider SDKs. This means better performance, simpler configuration, and access to the latest features without waiting for wrapper updates.
</Info>

<CardGroup cols={2}>
  <Card title="OpenAI" icon="openai">
    GPT-4, GPT-4o, GPT-4 Turbo, GPT-3.5 Turbo
  </Card>
  <Card title="Anthropic" icon="anthropic">
    Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku
  </Card>
  <Card title="Google" icon="google">
    Gemini 1.5 Pro, Gemini 1.5 Flash
  </Card>
  <Card title="Groq" icon="zap">
    Llama 3.1, Mixtral, and more (ultra-fast inference)
  </Card>
</CardGroup>

## Requirements

Your chosen LLM must support:

- **Tool calling**: also known as function calling; required for MCP tool execution
- **Structured output**: for type-safe responses (optional but recommended)
- **Streaming**: for real-time, token-by-token output (optional)

<Tip>
Most modern LLMs support these features. Check your provider's documentation to confirm tool calling support.
</Tip>


## Popular Provider Examples

### OpenAI
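A minimal sketch, assuming the `@langchain/openai` package and an `OPENAI_API_KEY` in your environment. The Playwright server entry is illustrative; substitute your own MCP servers:

```typescript
import { ChatOpenAI } from "@langchain/openai"
import { MCPAgent, MCPClient } from "mcp-use"

// ChatOpenAI reads OPENAI_API_KEY from the environment
const llm = new ChatOpenAI({ model: "gpt-4o", temperature: 0 })

// Illustrative server configuration; substitute your own MCP servers
const client = MCPClient.fromDict({
  mcpServers: {
    playwright: { command: "npx", args: ["@playwright/mcp@latest"] },
  },
})

const agent = new MCPAgent({ llm, client })
const result = await agent.run("List the tools you have available")
console.log(result)
```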

### Anthropic Claude
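The same pattern with Anthropic, assuming `@langchain/anthropic` and an `ANTHROPIC_API_KEY`; the model name is illustrative:

```typescript
import { ChatAnthropic } from "@langchain/anthropic"
import { MCPAgent, MCPClient } from "mcp-use"

// ChatAnthropic reads ANTHROPIC_API_KEY from the environment
const llm = new ChatAnthropic({ model: "claude-3-5-sonnet-latest", temperature: 0 })

// Fill in your own MCP server definitions
const client = MCPClient.fromDict({ mcpServers: { /* your servers */ } })
const agent = new MCPAgent({ llm, client })
```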

### Google Gemini
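For Gemini, assuming `@langchain/google-genai` and a `GOOGLE_API_KEY`:

```typescript
import { ChatGoogleGenerativeAI } from "@langchain/google-genai"
import { MCPAgent, MCPClient } from "mcp-use"

// ChatGoogleGenerativeAI reads GOOGLE_API_KEY from the environment
const llm = new ChatGoogleGenerativeAI({ model: "gemini-1.5-flash", temperature: 0 })

// Fill in your own MCP server definitions
const client = MCPClient.fromDict({ mcpServers: { /* your servers */ } })
const agent = new MCPAgent({ llm, client })
```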

### Groq (Fast Inference)
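For Groq's hosted open-source models, assuming `@langchain/groq` and a `GROQ_API_KEY`:

```typescript
import { ChatGroq } from "@langchain/groq"
import { MCPAgent, MCPClient } from "mcp-use"

// ChatGroq reads GROQ_API_KEY from the environment
const llm = new ChatGroq({ model: "llama-3.1-8b-instant", temperature: 0 })

// Fill in your own MCP server definitions
const client = MCPClient.fromDict({ mcpServers: { /* your servers */ } })
const agent = new MCPAgent({ llm, client })
```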

### Local Models with Ollama
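For fully local inference, assuming `@langchain/ollama` and an Ollama server running on the default port. Note that only some local models support tool calling, so pick one that does:

```typescript
import { ChatOllama } from "@langchain/ollama"
import { MCPAgent, MCPClient } from "mcp-use"

// Points at a locally running Ollama server; no API key needed.
// Make sure the model you pull supports tool calling (e.g. `ollama pull llama3.1`).
const llm = new ChatOllama({
  model: "llama3.1",
  baseUrl: "http://localhost:11434",
  temperature: 0,
})

// Fill in your own MCP server definitions
const client = MCPClient.fromDict({ mcpServers: { /* your servers */ } })
const agent = new MCPAgent({ llm, client })
```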

## Model Requirements

### Tool Calling Support

For MCP tools to work properly, your chosen model **must support tool calling**. Most modern LLMs support this:

✅ **Supported Models:**
- OpenAI: GPT-4, GPT-4o, GPT-3.5 Turbo
- Anthropic: Claude 3+ series
- Google: Gemini Pro, Gemini Flash
- Groq: Llama 3.1, Mixtral models
- Most recent open-source models

❌ **Not Supported:**
- Base completion models without a chat/tool-calling interface
- Older model versions that predate function calling

### Checking Tool Support

You can verify if a model supports tools:
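One pragmatic check, assuming a LangChain-style chat model: models that support tool calling expose a `bindTools` method. This only confirms the method exists; it does not guarantee the provider's API will accept tools for that specific model:

```typescript
// Heuristic: LangChain chat models that support tool calling implement bindTools().
function supportsToolCalling(llm: unknown): boolean {
  return typeof (llm as { bindTools?: unknown })?.bindTools === "function"
}

// Plain objects standing in for model instances
console.log(supportsToolCalling({ bindTools: () => ({}) })) // true
console.log(supportsToolCalling({}))                        // false
```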

## Model Configuration Tips

### Temperature Settings

Different tasks benefit from different temperature settings:
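The numbers below are rules of thumb, not provider recommendations; tune them for your workload:

```typescript
// Rough starting points per task type; lower = more deterministic.
const TEMPERATURE_PRESETS = {
  toolCalling: 0,      // deterministic tool selection and arguments
  codeGeneration: 0.1, // mostly deterministic, slight variety
  generalQA: 0.3,      // balanced
  creativeWriting: 0.8 // more diverse output
} as const

type TaskKind = keyof typeof TEMPERATURE_PRESETS

function temperatureFor(task: TaskKind): number {
  return TEMPERATURE_PRESETS[task]
}
```

Pass the result to your model constructor, e.g. `new ChatOpenAI({ model: "gpt-4o", temperature: temperatureFor("toolCalling") })`.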

### Model-Specific Parameters

Each provider has unique parameters you can configure:
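A sketch of provider-specific knobs, assuming the LangChain `@langchain/openai` and `@langchain/anthropic` packages; check each package's reference for the full option list:

```typescript
import { ChatOpenAI } from "@langchain/openai"
import { ChatAnthropic } from "@langchain/anthropic"

// OpenAI: sampling and penalty knobs
const openai = new ChatOpenAI({
  model: "gpt-4o",
  temperature: 0.2,
  topP: 0.9,
  frequencyPenalty: 0.1,
  maxTokens: 2048,
})

// Anthropic: response length cap plus top-k/top-p sampling
const anthropic = new ChatAnthropic({
  model: "claude-3-5-sonnet-latest",
  temperature: 0.2,
  maxTokens: 2048,
  topK: 40,
  topP: 0.9,
})
```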

## Cost Optimization

### Choosing Cost-Effective Models

Consider your use case when selecting models:

| Use Case | Recommended Models | Reason |
|----------|-------------------|--------|
| Development/Testing | GPT-3.5 Turbo, Claude Haiku | Lower cost, good performance |
| Production/Complex | GPT-4o, Claude Sonnet | Best performance |
| High Volume | Groq models | Fast inference, competitive pricing |
| Privacy/Local | Ollama models | No API costs, data stays local |

### Token Management
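A crude heuristic for budgeting prompts: English text averages roughly four characters per token. The helper names below are illustrative; for exact counts use a real tokenizer (e.g. the `tiktoken` package for OpenAI models):

```typescript
// ~4 characters per token is a rough average for English text.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4)
}

// Leave headroom for the model's reply when checking prompt size
function fitsContext(prompt: string, contextWindow: number, replyBudget = 1024): boolean {
  return estimateTokens(prompt) <= contextWindow - replyBudget
}

console.log(estimateTokens("The quick brown fox")) // 5
```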

## Environment Setup

Always use environment variables for API keys:

```bash
# .env file
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=AI...
GROQ_API_KEY=gsk_...
```

## Advanced Integration

### Custom Model Wrappers

You can create custom wrappers for specialized models:
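A minimal sketch of the wrapper shape, not tied to any real base class: it wraps anything with an `invoke()` method to add retry-on-failure. A production LangChain custom model would extend `BaseChatModel` instead; this simplified stand-in just shows the delegation pattern:

```typescript
// Anything with an invoke() method can be wrapped.
interface Invokable {
  invoke(input: string): Promise<string>
}

class RetryingLLM implements Invokable {
  constructor(private inner: Invokable, private maxRetries = 3) {}

  async invoke(input: string): Promise<string> {
    let lastError: unknown
    for (let attempt = 1; attempt <= this.maxRetries; attempt++) {
      try {
        return await this.inner.invoke(input)
      } catch (err) {
        lastError = err // retry on any failure, e.g. a transient rate limit
      }
    }
    throw lastError
  }
}
```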

### Model Switching

Switch between models dynamically:
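One way to structure this: map a task profile to a provider/model pair, then construct the matching chat model and a fresh `MCPAgent` from it. The profile names and model strings below are illustrative:

```typescript
// Map a task profile to a provider/model pair.
type TaskProfile = "cheap" | "best" | "fast"

interface ModelChoice {
  provider: "openai" | "anthropic" | "groq"
  model: string
}

function pickModel(task: TaskProfile): ModelChoice {
  switch (task) {
    case "cheap": return { provider: "openai", model: "gpt-3.5-turbo" }
    case "best":  return { provider: "anthropic", model: "claude-3-5-sonnet-latest" }
    case "fast":  return { provider: "groq", model: "llama-3.1-8b-instant" }
  }
}

console.log(pickModel("fast").model) // "llama-3.1-8b-instant"
```

Instantiate the corresponding chat model (`ChatOpenAI`, `ChatAnthropic`, or `ChatGroq`) from the returned choice and pass it to a new `MCPAgent({ llm, client })`.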

## Troubleshooting

### Common Issues

1. **"Model doesn't support tools"**: Ensure your model supports function calling
2. **API key errors**: Check environment variables and API key validity
3. **Rate limiting**: Implement retry logic or use different models
4. **Token limits**: Adjust max_tokens or use models with larger context windows

### Debug Model Behavior

```typescript
// Enable verbose logging to see model interactions
const agent = new MCPAgent({
    llm,
    client,
    verbose: true  // Shows detailed model interactions
})
```

For more LLM providers and detailed integration examples, visit the [LangChain Chat Models documentation](https://python.langchain.com/docs/integrations/chat/).
