---
title: "Migration Guide: PandasAI v2 to v3"
description: "Complete guide to migrate from PandasAI v2 to v3"
---

<Note title="Migration Notice">
  PandasAI 3.0 introduces significant architectural changes while maintaining
  backward compatibility for most use cases. This guide will help you migrate
  your existing v2 code to take advantage of v3's new features.
</Note>

## Breaking Changes

### Configuration

PandasAI v3 has simplified the configuration system. Configuration is now set globally using `pai.config.set()` instead of being passed per-dataframe. Several v2 configuration options have been removed:

**Removed Options:**
- **`save_charts`**: Replaced with `ChartResponse` objects that give users full control over chart handling
- **`enable_cache`**: Caching functionality has been completely removed from the core library
- **`security`**: Security features are now handled through the sandbox environment for code execution
- **`custom_whitelisted_dependencies`**: Replaced with sandbox-based security model
- **`save_charts_path`**: No longer needed with the new chart response system

**v2 Configuration:**
```python
from pandasai import SmartDataframe

config = {
    "llm": llm,
    "save_logs": True,
    "verbose": False,
    "max_retries": 3,
    "save_charts": True,
    "enable_cache": True,
    "security": "standard"
}

df = SmartDataframe(data, config=config)
```

**v3 Configuration:**
```python
import pandasai as pai

# Global configuration - only essential options remain
pai.config.set({
    "llm": llm,
    "save_logs": True,
    "verbose": False,
    "max_retries": 3
})

# All dataframes now use this configuration
df = pai.DataFrame(data)
```

**Key Changes:**
- **Global Configuration**: Set once with `pai.config.set()` and applies to all dataframes
- **Chart Handling**: Charts are returned as `ChartResponse` objects that you can save, display, or process as needed
- **Security**: Use the sandbox environment for secure code execution instead of configuration-based security
- **Caching**: Removed from the core library to reduce complexity

### LLM

LLMs are no longer built into the core library. Instead, they are provided through extensions. You must install the appropriate LLM extension separately. We partner with LiteLLM to provide a unified interface supporting 100+ models.

**v2 LLM Setup:**
```python
from pandasai.llm import OpenAI, GooglePalm, AzureOpenAI
from pandasai import SmartDataframe

llm = OpenAI(api_token="your-api-key")
df = SmartDataframe(data, config={"llm": llm})
```

**v3 LLM Setup:**
```bash
# Install LLM extension first
pip install pandasai-litellm
```

```python
import pandasai as pai
from pandasai_litellm.litellm import LiteLLM

# Global configuration (recommended)
llm = LiteLLM(model="gpt-4o-mini", api_key="your-api-key")
pai.config.set({"llm": llm})

# Now all dataframes use this LLM
df = pai.DataFrame(data)
```

**Key Changes:**
- LLMs are now extension-based, not built-in
- Install `pandasai-litellm` for unified LLM interface
- LiteLLM supports 100+ models (GPT-4, Claude, Gemini, etc.)
- Configure LLM globally instead of per-dataframe
- You need to install both `pandasai` and `pandasai-litellm`

**Supported Models:**
```python
# OpenAI
llm = LiteLLM(model="gpt-4o-mini", api_key="your-key")

# Anthropic
llm = LiteLLM(model="claude-3-opus", api_key="your-key")

# Google
llm = LiteLLM(model="gemini-pro", api_key="your-key")
```

### Data Connectors

Data connectors are now separate extensions to make the library more lightweight and efficient. You must install the specific connector extension you need. Cloud data connectors are an [enterprise feature](/v3/enterprise-features) requiring a valid license.

**v2 Connectors:**
```python
from pandasai.connectors import PostgreSQLConnector
from pandasai import SmartDataframe

connector = PostgreSQLConnector(config={
    "host": "localhost",
    "database": "mydb",
    "table": "sales"
})

df = SmartDataframe(connector)
```

**v3 Data Extensions:**
```bash
# Install SQL extension first
pip install pandasai-sql[postgres]
```

```python
import pandasai as pai

# Using semantic layer with SQL
df = pai.create(
    path="company/sales",
    description="Sales data from PostgreSQL",
    source={
        "type": "postgres",
        "connection": {
            "host": "localhost",
            "database": "mydb",
            "user": "${DB_USER}",
            "password": "${DB_PASSWORD}"
        },
        "table": "sales"
    }
)
```

**Key Changes:**
- Connectors are now separate extensions
- Install only the connectors you need (e.g., `pandasai-sql[postgres]`, `pandasai-sql[mysql]`)
- Cloud connectors (BigQuery, Snowflake, etc.) require [enterprise license](/v3/enterprise-features)
- Use `pai.create()` with semantic layer for better data management
- Environment variables supported for credentials (e.g., `${DB_USER}`)

### Skills

<Note title="Enterprise Feature">
  Skills are part of PandasAI Enterprise and require a valid enterprise license for production use. See [Enterprise Features](/v3/enterprise-features) for more details.
</Note>

Skills are now defined at the global level using the `@pai.skill` decorator. They are registered automatically and available to all dataframes and agents; no manual registration is needed.

**v2 Skills:**
```python
from pandasai.skills import skill
from pandasai import Agent

@skill
def calculate_bonus(salary: float, performance: float) -> float:
    """Calculate employee bonus based on salary and performance."""
    if performance >= 90:
        return salary * 0.15
    return salary * 0.10

agent = Agent([df])
agent.add_skills(calculate_bonus)
```

**v3 Skills:**
```python
import pandasai as pai
from pandasai import Agent

@pai.skill
def calculate_bonus(salary: float, performance: float) -> float:
    """Calculate employee bonus based on salary and performance."""
    if performance >= 90:
        return salary * 0.15
    return salary * 0.10

# Skills are automatically registered globally
agent = Agent([df])
# No need to manually add skills - they're available automatically
```

**Key Changes:**
- Use `@pai.skill` decorator instead of `@skill`
- Skills are automatically registered globally
- No need to call `agent.add_skills()` - skills are available to all agents and dataframes
- Skills work with `pai.chat()`, `SmartDataframe`, and `Agent`

### Training

Training functionality has changed significantly in v3. The `train()` method is no longer available; training is now handled through vector stores for semantic search and context retrieval.

**Key Changes:**
- The `train()` method from v2 has been removed
- Training is now handled through vector stores
- Use vector stores for semantic search and context retrieval
- This approach provides better scalability and performance
- Vector store integration requires [enterprise features](/v3/enterprise-features)

## Backward Compatibility

PandasAI v3 maintains backward compatibility for most common use cases, but we recommend adopting the new patterns for better performance and features.

### Pai DataFrames (Recommended)

While `SmartDataframe` and `SmartDatalake` classes are still available, we recommend using the new `pai.DataFrame()` and `pai.chat()` methods with semantic dataframes for better functionality.

**SmartDataframe Still Works:**
```python
from pandasai import SmartDataframe
import pandas as pd

# v2 style still works
df = pd.DataFrame({"x": [1, 2, 3], "y": [4, 5, 6]})
smart_df = SmartDataframe(df)
response = smart_df.chat("What is the sum of x?")
```

**Recommended v3 Approach:**
```python
import pandasai as pai
import pandas as pd

# Configure LLM globally
pai.config.set({"llm": llm})

# Create semantic dataframe
df = pd.DataFrame({"x": [1, 2, 3], "y": [4, 5, 6]})
df = pai.DataFrame(df)
response = df.chat("What is the sum of x?")

# Or with semantic layer for better context
df = pai.create(
    path="data/sample",
    df=df,
    description="Sample data with x and y values",
    columns={
        "x": {"type": "int", "description": "X values"},
        "y": {"type": "int", "description": "Y values"}
    }
)
response = df.chat("What is the sum of x?")
```

**SmartDatalake Replacement:**

SmartDatalake still works, but you can now query multiple dataframes directly with `pai.chat()` without instantiating a SmartDatalake.

```python
# v2 style still works
from pandasai import SmartDatalake, SmartDataframe
import pandas as pd

employees_df = pd.DataFrame(employees_data)
salaries_df = pd.DataFrame(salaries_data)

lake = SmartDatalake([
    SmartDataframe(employees_df),
    SmartDataframe(salaries_df)
])
response = lake.chat("Who gets paid the most?")

# v3 recommended approach
import pandasai as pai

employees = pai.DataFrame(employees_data)
salaries = pai.DataFrame(salaries_data)

# Query across multiple dataframes directly
response = pai.chat("Who gets paid the most?", employees, salaries)
```

### Agent (Works as Before)

The `Agent` class continues to work similarly to v2, with improved global skills support.

```python
from pandasai import Agent
import pandasai as pai

# Configure globally
pai.config.set({"llm": llm})

# Agent works as before
agent = Agent([df1, df2])
response = agent.chat("Analyze the data")

# Skills are now automatically available
@pai.skill
def custom_function():
    """Custom function"""
    pass

# No need to add skills manually - they're already available
response = agent.chat("Use custom function")
```

## Migration Steps

Follow these steps to migrate your PandasAI v2 codebase to v3:

### Step 1: Update Installation

Install PandasAI v3 and the LLM extension:

```bash
# Using pip
pip install pandasai
pip install pandasai-litellm

# Using poetry (recommended)
poetry add pandasai
poetry add pandasai-litellm
```

If you use SQL connectors, install the appropriate extension:

```bash
# For PostgreSQL
pip install pandasai-sql[postgres]

# For MySQL
pip install pandasai-sql[mysql]

# For other databases
pip install pandasai-sql[<database>]
```

### Step 2: Update Imports

Replace v2 imports with v3 equivalents:

```python
# v2 imports
from pandasai import SmartDataframe, SmartDatalake, Agent
from pandasai.llm import OpenAI
from pandasai.skills import skill
from pandasai.connectors import PostgreSQLConnector

# v3 imports
import pandasai as pai
from pandasai import Agent
from pandasai_litellm.litellm import LiteLLM
# Skills use @pai.skill decorator
# Connectors use pai.create() with source config
```

### Step 3: Configure LLM Globally

Set up your LLM configuration once at the application level:

```python
from pandasai_litellm.litellm import LiteLLM
import pandasai as pai

# Configure LLM globally
llm = LiteLLM(model="gpt-4o-mini", api_key="your-api-key")
pai.config.set({
    "llm": llm,
    "verbose": False,
    "save_logs": True,
    "max_retries": 3
})
```

### Step 4: Migrate DataFrames

Update your dataframe instantiation. You can keep using `SmartDataframe` for backward compatibility or migrate to the new `pai.DataFrame()`:

**Option A: Keep SmartDataframe (backward compatible)**
```python
from pandasai import SmartDataframe

# This still works in v3
df = SmartDataframe(your_data)
response = df.chat("Your question")
```

**Option B: Use pai.DataFrame (recommended)**
```python
import pandasai as pai

# Simple approach
df = pai.DataFrame(your_data)
response = df.chat("Your question")

# With semantic layer (best for production)
df = pai.create(
    path="company/sales-data",
    df=your_data,
    description="Sales data by country and region",
    columns={
        "country": {"type": "string", "description": "Country name"},
        "sales": {"type": "float", "description": "Sales amount in USD"}
    }
)
response = df.chat("Your question")
```

**For Multiple DataFrames:**
```python
# v2 style (still works)
from pandasai import SmartDatalake, SmartDataframe
lake = SmartDatalake([SmartDataframe(df1), SmartDataframe(df2)])

# v3 recommended
import pandasai as pai
df1 = pai.DataFrame(data1)
df2 = pai.DataFrame(data2)
response = pai.chat("Your question", df1, df2)
```

### Step 5: Migrate Data Connectors

If you use database connectors, update to the new extension-based approach:

```python
# v2
from pandasai.connectors import PostgreSQLConnector
connector = PostgreSQLConnector(config={...})
df = SmartDataframe(connector)

# v3
import pandasai as pai
df = pai.create(
    path="company/database-table",
    description="Description of your data",
    source={
        "type": "postgres",
        "connection": {
            "host": "localhost",
            "database": "mydb",
            "user": "${DB_USER}",
            "password": "${DB_PASSWORD}"
        },
        "table": "your_table"
    }
)
```

### Step 6: Update Skills (if applicable)

<Note title="Enterprise Feature">
  Skills require a valid enterprise license for production use. See [Enterprise Features](/v3/enterprise-features) for more details.
</Note>

Migrate your skills to use the new decorator:

```python
# v2
from pandasai.skills import skill
from pandasai import Agent

@skill
def calculate_metric(value: float) -> float:
    """Calculate custom metric."""
    return value * 1.5

agent = Agent([df])
agent.add_skills(calculate_metric)

# v3
import pandasai as pai
from pandasai import Agent

@pai.skill
def calculate_metric(value: float) -> float:
    """Calculate custom metric."""
    return value * 1.5

# Skills are automatically available - no need to add them
agent = Agent([df])
```

### Step 7: Remove Deprecated Configuration

Remove any deprecated configuration options from your code:

```python
# Remove these from your config:
# - save_charts
# - enable_cache
# - security
# - custom_whitelisted_dependencies
# - save_charts_path

# v2 (remove these)
config = {
    "llm": llm,
    "save_charts": True,
    "enable_cache": True,
    "security": "standard"
}

# v3 (keep only these)
pai.config.set({
    "llm": llm,
    "save_logs": True,
    "verbose": False,
    "max_retries": 3
})
```
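If you manage your configuration as a plain dict, the deprecated keys can also be stripped programmatically before calling `pai.config.set()`. A minimal sketch (the helper name is an assumption for illustration, not part of the PandasAI API):

```python
# Deprecated v2 options, per the list above.
DEPRECATED_V2_KEYS = {
    "save_charts",
    "save_charts_path",
    "enable_cache",
    "security",
    "custom_whitelisted_dependencies",
}

def port_v2_config(v2_config: dict) -> dict:
    """Return a copy of a v2 config dict with deprecated options removed."""
    return {k: v for k, v in v2_config.items() if k not in DEPRECATED_V2_KEYS}

old = {"llm": None, "save_charts": True, "enable_cache": True, "max_retries": 3}
print(port_v2_config(old))  # {'llm': None, 'max_retries': 3}
```

The result can be passed straight to `pai.config.set()`, which keeps the migration to a one-line change wherever you build the config.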

### Step 8: Test Your Migration

Test all functionality to ensure everything works correctly:

**Basic Chat Test:**
```python
import pandasai as pai
import pandas as pd

df = pd.DataFrame({"x": [1, 2, 3], "y": [4, 5, 6]})
df = pai.DataFrame(df)
response = df.chat("What is the sum of x?")
print(response)
```

**Multi-DataFrame Test:**
```python
df1 = pai.DataFrame({"sales": [100, 200, 300]})
df2 = pai.DataFrame({"costs": [50, 100, 150]})
response = pai.chat("What is the total profit?", df1, df2)
print(response)
```

**Skills Test (if applicable):**
```python
@pai.skill
def test_skill(x: int) -> int:
    """Double the value."""
    return x * 2

df = pai.DataFrame({"values": [1, 2, 3]})
response = df.chat("Double the first value")
print(response)
```

## Common Issues and Solutions

### Issue: LLM Not Found

**Problem:** `ModuleNotFoundError: No module named 'pandasai.llm'`

**Cause:** LLMs are no longer built into the core library in v3.

**Solution:** Install the LLM extension:
```bash
pip install pandasai-litellm
```

Then update your imports:
```python
# v2 (won't work)
from pandasai.llm import OpenAI

# v3 (correct)
from pandasai_litellm.litellm import LiteLLM
llm = LiteLLM(model="gpt-4o-mini", api_key="your-key")
```

### Issue: Connector Not Found

**Problem:** `ModuleNotFoundError: No module named 'pandasai.connectors'`

**Cause:** Connectors are now separate extensions.

**Solution:** Install the appropriate connector extension:
```bash
# For SQL databases
pip install pandasai-sql[postgres]  # or mysql, sqlite, etc.
```

Then use `pai.create()` with source configuration:
```python
df = pai.create(
    path="data/table",
    description="Table description",
    source={
        "type": "postgres",
        "connection": {...},
        "table": "table_name"
    }
)
```

### Issue: Skills Not Working

**Problem:** Skills not being recognized or `@skill` decorator not found.

**Cause:** Skills decorator has changed to `@pai.skill` and registration is now automatic.

**Solution:** Update your skill definition:
```python
# v2 (won't work)
from pandasai.skills import skill
@skill
def my_skill():
    pass
agent.add_skills(my_skill)

# v3 (correct)
import pandasai as pai
@pai.skill
def my_skill():
    """Docstring is required."""
    pass
# No need to add skills - they're automatically available
```

### Issue: Configuration Not Applied

**Problem:** Configuration settings not taking effect on dataframes.

**Cause:** Configuration is now global, not per-dataframe.

**Solution:** Use global configuration before creating dataframes:
```python
# v2 (won't work the same way)
df = SmartDataframe(data, config={"llm": llm, "verbose": True})

# v3 (correct)
pai.config.set({"llm": llm, "verbose": True})
df = pai.DataFrame(data)
```

### Issue: Training Method Not Found

**Problem:** `AttributeError: 'SmartDataframe' object has no attribute 'train'`

**Cause:** The `train()` method has been removed in v3.

**Solution:** Use vector stores for training and context retrieval. This is an [enterprise feature](/v3/enterprise-features). Contact support for implementation details.

### Issue: Chart Not Saving

**Problem:** Charts are not being saved automatically.

**Cause:** `save_charts` configuration has been removed.

**Solution:** Handle chart responses manually:
```python
response = df.chat("Create a bar chart")

# Check if response is a chart
if hasattr(response, 'save'):
    response.save('chart.png')
```

### Issue: Import Error with Agent

**Problem:** `Agent` class not found or import errors.

**Cause:** Import path might be incorrect.

**Solution:** Import directly from pandasai:
```python
from pandasai import Agent
import pandasai as pai

pai.config.set({"llm": llm})
agent = Agent([df1, df2])
```

### Issue: Environment Variables Not Working

**Problem:** Database credentials with environment variables not being recognized.

**Cause:** Incorrect syntax for environment variable references.

**Solution:** Use the correct syntax with `${}`:
```python
source={
    "connection": {
        "user": "${DB_USER}",      # Correct
        "password": "${DB_PASSWORD}"  # Correct
    }
}
```
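Since an unset variable only surfaces as an error at query time, it can help to verify that every `${VAR}` placeholder in your connection config is actually set before calling `pai.create()`. A minimal pre-flight check (the helper is a plain-Python sketch, not part of the PandasAI API):

```python
import os
import re

def missing_env_vars(connection: dict) -> list:
    """Return the names of ${VAR} placeholders with no environment value."""
    missing = []
    for value in connection.values():
        for name in re.findall(r"\$\{(\w+)\}", str(value)):
            if name not in os.environ:
                missing.append(name)
    return missing

connection = {
    "host": "localhost",
    "user": "${DB_USER}",
    "password": "${DB_PASSWORD}",
}

os.environ["DB_USER"] = "analyst"      # set for this example
os.environ.pop("DB_PASSWORD", None)    # deliberately unset
print(missing_env_vars(connection))    # ['DB_PASSWORD']
```

Running a check like this at startup turns a confusing connection failure into an explicit "DB_PASSWORD is not set" message.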
