---
title: "Models"
description: "CAMEL-AI: Flexible integration and deployment of top LLMs and multimodal models like [OpenAI](https://openai.com/), [Mistral](https://mistral.ai/), [Gemini](https://ai.google.dev/gemini-api/docs/models), [Llama](https://www.llama.com/), [Nebius](https://nebius.com/), and more."
icon: gear-code
---

<Note type="info" title="What is a Model in CAMEL?">
  In CAMEL, every <b>model</b> refers specifically to a <b>Large Language Model (LLM)</b>: the intelligent core powering your agent's understanding, reasoning, and conversational capabilities.
</Note>

Play with different models in our [interactive Colab Notebook](https://colab.research.google.com/drive/18hQLpte6WW2Ja3Yfj09NRiVY-6S2MFu7?usp=sharing).

<CardGroup cols={2}>
  <Card title="Large Language Models (LLMs)" icon="brain">
    LLMs are sophisticated AI systems trained on vast datasets to understand and generate human-like text. They reason, summarize, create content, and drive conversations effortlessly.
  </Card>

  <Card title="Flexible Model Integration" icon="plug">
    CAMEL allows quick integration and swapping of leading LLMs from providers like OpenAI, Gemini, Llama, Anthropic, Nebius, and more, helping you match the best model to your task.
  </Card>

  <Card title="Optimized for Customization" icon="sliders">
    Customize performance parameters such as temperature, token limits, and response structures easily, balancing creativity, accuracy, and efficiency.
  </Card>

  <Card title="Rapid Experimentation" icon="refresh-ccw">
    Experiment freely: CAMEL’s modular design lets you seamlessly compare and benchmark different LLMs, adapting swiftly as your project needs evolve.
  </Card>
</CardGroup>

## Supported Model Platforms in CAMEL

CAMEL supports a wide range of models, including [OpenAI’s GPT series](https://platform.openai.com/docs/models), [Meta’s Llama models](https://www.llama.com/), [DeepSeek models](https://www.deepseek.com/) (R1 and other variants), and more.

### Direct Integrations

| Model Provider   | Model Type(s) |
| :--------------  | :------------ |
| **OpenAI**       | gpt-4.5-preview<br/>gpt-4o, gpt-4o-mini<br/>o1, o1-preview, o1-mini<br/>o3-mini, o3-pro, o3<br/>o4-mini<br/>gpt-4.1, gpt-4.1-mini, gpt-4.1-nano<br/>gpt-5, gpt-5-mini, gpt-5-nano<br/>gpt-4-turbo, gpt-4, gpt-3.5-turbo |
| **Azure OpenAI** | gpt-4o, gpt-4-turbo<br/>gpt-4, gpt-3.5-turbo |
| **Mistral AI**   | mistral-large-latest, pixtral-12b-2409<br/>ministral-8b-latest, ministral-3b-latest<br/>open-mistral-nemo, codestral-latest<br/>open-mistral-7b, open-mixtral-8x7b<br/>open-mixtral-8x22b, open-codestral-mamba<br/>mistral-small-2506, mistral-medium-2508<br/>magistral-small-1.2, magistral-medium-1.2 |
| **Moonshot**     | moonshot-v1-8k<br/>moonshot-v1-32k<br/>moonshot-v1-128k |
| **Anthropic**    | claude-2.1, claude-2.0, claude-instant-1.2<br/>claude-3-opus-latest, claude-3-sonnet-20240229, claude-3-haiku-20240307<br/>claude-3-5-sonnet-latest, claude-3-5-haiku-latest, claude-3-7-sonnet-latest<br/>claude-sonnet-4-5, claude-sonnet-4-20250514, claude-opus-4-20250514, claude-opus-4-1-20250805 |
| **Gemini**       | gemini-2.5-pro, gemini-2.5-flash<br/>gemini-2.0-flash, gemini-2.0-flash-thinking-exp<br/>gemini-2.0-flash-lite |
| **Lingyiwanwu**  | yi-lightning, yi-large, yi-medium<br/>yi-large-turbo, yi-vision, yi-medium-200k<br/>yi-spark, yi-large-rag, yi-large-fc |
| **Qwen**         | qwen3-coder-plus, qwq-32b-preview, qwq-plus, qvq-72b-preview, qwen-max, qwen-plus, qwen-turbo, qwen-long<br/>qwen-plus-latest, qwen-plus-2025-04-28, qwen-turbo-latest, qwen-turbo-2025-04-28<br/>qwen-vl-max, qwen-vl-plus, qwen-vl-72b-instruct, qwen-math-plus, qwen-math-turbo, qwen-coder-turbo<br/>qwen2.5-coder-32b-instruct, qwen2.5-72b-instruct, qwen2.5-32b-instruct, qwen2.5-14b-instruct |
| **DeepSeek**     | deepseek-chat<br/>deepseek-reasoner |
| **CometAPI**     | **All models available on [CometAPI](https://api.cometapi.com/pricing)**<br/>Including: gpt-5-chat-latest, gpt-5, gpt-5-mini, gpt-5-nano<br/>claude-opus-4-1-20250805, claude-sonnet-4-20250514, claude-3-7-sonnet-latest<br/>gemini-2.5-pro, gemini-2.5-flash, grok-4-0709, grok-3<br/>deepseek-v3.1, deepseek-v3, deepseek-r1-0528, qwen3-30b-a3b |
| **Nebius**       | **All models available on [Nebius AI Studio](https://studio.nebius.com/)**<br/>Including: gpt-oss-120b, gpt-oss-20b, GLM-4.5<br/>DeepSeek V3 & R1, LLaMA, Mistral, and more |
| **ZhipuAI**      | glm-4, glm-4v, glm-4v-flash<br/>glm-4v-plus-0111, glm-4-plus, glm-4-air<br/>glm-4-air-0111, glm-4-airx, glm-4-long<br/>glm-4-flashx, glm-zero-preview, glm-4-flash, glm-3-turbo |
| **InternLM**     | internlm3-latest, internlm3-8b-instruct<br/>internlm2.5-latest, internlm2-pro-chat |
| **Reka**         | reka-core, reka-flash, reka-edge |
| **COHERE**       | command-r-plus, command-r, command-light, command, command-nightly |
| **ERNIE**        | ernie-x1-turbo-32k, ernie-x1-32k, ernie-x1-32k-preview<br/>ernie-4.5-turbo-128k, ernie-4.5-turbo-32k<br/>deepseek-v3, deepseek-r1, qwen3-235b-a22b |
| **MiniMax**      | MiniMax-M2, MiniMax-M2-Stable |


### API & Connector Platforms

| Model Platform   | Supported via API/Connector |
| :--------------  | :-------------------------- |
| **GROQ**         | [supported models](https://console.groq.com/docs/models) |
| **TOGETHER AI**  | [supported models](https://docs.together.ai/docs/dedicated-models) |
| **SambaNova**    | [supported models](https://docs.sambanova.ai/cloud/docs/get-started/supported-models) |
| **Ollama**       | [supported models](https://ollama.com/library) |
| **OpenRouter**   | [supported models](https://openrouter.ai/models) |
| **PPIO**         | [supported models](https://ppio.com/model-api/console) |
| **LiteLLM**      | [supported models](https://docs.litellm.ai/docs/providers) |
| **LMStudio**     | [supported models](https://lmstudio.ai/models) |
| **vLLM**         | [supported models](https://docs.vllm.ai/en/latest/models/supported_models.html) |
| **SGLANG**       | [supported models](https://docs.sglang.ai/supported_models/generative_models.html) |
| **NetMind**      | [supported models](https://www.netmind.ai/modelsLibrary) |
| **NOVITA**       | [supported models](https://novita.ai/models?utm_source=github_owl&utm_medium=github_readme&utm_campaign=github_link) |
| **NVIDIA**       | [supported models](https://docs.api.nvidia.com/nim/reference/llm-apis) |
| **AIML**         | [supported models](https://docs.aimlapi.com/api-overview/model-database/text-models) |
| **ModelScope**   | [supported models](https://www.modelscope.cn/docs/model-service/API-Inference/intro) |
| **AWS Bedrock**  | [supported models](https://us-west-2.console.aws.amazon.com/bedrock/home?region=us-west-2#/) |
| **IBM WatsonX**  | [supported models](https://jp-tok.dataplatform.cloud.ibm.com/samples?context=wx&tab=foundation-model) |
| **Crynux**       | [supported models](https://docs.crynux.ai/application-development/how-to-run-llm-using-crynux-network/supported-models) |
| **SiliconFlow**  | [supported models](https://cloud.siliconflow.cn/me/models) |
| **AMD**          | dvue-aoai-001-gpt-4.1 |
| **Volcano**      | [supported models](https://console.volcengine.com/ark) |
| **Qianfan**      | [supported models](https://cloud.baidu.com/doc/qianfan/s/rmh4stp0j) |



## How to Use Models via API Calls

Integrate your favorite models into CAMEL-AI with straightforward Python calls. Choose a provider below to see how it’s done:

<Tabs>

  <Tab title="OpenAI">

  Here's how you use OpenAI models such as GPT-4o-mini with CAMEL:

  ```python
  from camel.models import ModelFactory
  from camel.types import ModelPlatformType, ModelType
  from camel.configs import ChatGPTConfig
  from camel.agents import ChatAgent

  model = ModelFactory.create(
      model_platform=ModelPlatformType.OPENAI,
      model_type=ModelType.GPT_4O_MINI,
      model_config_dict=ChatGPTConfig(temperature=0.2).as_dict(),
  )

  agent = ChatAgent(
      system_message="You are a helpful assistant.",
      model=model
  )

  response = agent.step("Say hi to CAMEL AI community.")
  print(response.msg.content)
  ```

  </Tab>

  <Tab title="Gemini">

  Using Google's Gemini models in CAMEL:

 - **Google AI Studio** ([Quick Start](https://aistudio.google.com/)): Try models quickly in a no-code environment.
 - **API Key Setup** ([Generate Key](https://aistudio.google.com/app/apikey)): Obtain your Gemini API key to start integration.
 - **Gemini API Docs** ([Deep Dive](https://ai.google.dev/gemini-api/docs)): Explore detailed Gemini API capabilities.

  ```python
  from camel.models import ModelFactory
  from camel.types import ModelPlatformType, ModelType
  from camel.configs import GeminiConfig
  from camel.agents import ChatAgent

  model = ModelFactory.create(
      model_platform=ModelPlatformType.GEMINI,
      model_type=ModelType.GEMINI_2_5_PRO,
      model_config_dict=GeminiConfig(temperature=0.2).as_dict(),
  )

  agent = ChatAgent(
      system_message="You are a helpful assistant.",
      model=model
  )

  response = agent.step("Say hi to CAMEL AI community.")
  print(response.msgs[0].content)
  ```

  </Tab>

  <Tab title="Mistral">

  Integrate Mistral AI models such as Magistral Medium into CAMEL:

  ```python
  from camel.models import ModelFactory
  from camel.types import ModelPlatformType, ModelType
  from camel.configs import MistralConfig
  from camel.agents import ChatAgent

  model = ModelFactory.create(
      model_platform=ModelPlatformType.MISTRAL,
      model_type=ModelType.MAGISTRAL_MEDIUM_1_2,
      model_config_dict=MistralConfig(temperature=0.0).as_dict(),
  )

  agent = ChatAgent(
      system_message="You are a helpful assistant.",
      model=model
  )

  response = agent.step("Say hi to CAMEL AI community.")
  print(response.msgs[0].content)
  ```

  </Tab>

  <Tab title="Anthropic">

  Leveraging Anthropic's Claude models within CAMEL:

  ```python
  from camel.models import ModelFactory
  from camel.types import ModelPlatformType, ModelType
  from camel.configs import AnthropicConfig
  from camel.agents import ChatAgent

  model = ModelFactory.create(
      model_platform=ModelPlatformType.ANTHROPIC,
      model_type=ModelType.CLAUDE_3_5_SONNET,
      model_config_dict=AnthropicConfig(temperature=0.2).as_dict(),
  )

  agent = ChatAgent(
      system_message="You are a helpful assistant.",
      model=model
  )

  response = agent.step("Say hi to CAMEL AI community.")
  print(response.msgs[0].content)
  ```

  </Tab>

  <Tab title="CometAPI">

  Leverage [CometAPI](https://api.cometapi.com/)'s unified access to multiple frontier AI models:

 - **CometAPI Platform** ([CometAPI](https://www.cometapi.com/?utm_source=camel-ai&utm_campaign=integration&utm_medium=integration&utm_content=integration)): Unified access to multiple frontier AI models.
 - **API Key Setup**: Obtain your CometAPI key to start integration.
 - **OpenAI Compatible**: Use familiar OpenAI API patterns with advanced frontier models.

  ```python
  from camel.models import ModelFactory
  from camel.types import ModelPlatformType, ModelType
  from camel.configs import CometAPIConfig
  from camel.agents import ChatAgent

  model = ModelFactory.create(
      model_platform=ModelPlatformType.COMETAPI,
      model_type=ModelType.COMETAPI_GPT_5_CHAT_LATEST,
      model_config_dict=CometAPIConfig(temperature=0.2).as_dict(),
  )

  agent = ChatAgent(
      system_message="You are a helpful assistant.",
      model=model
  )

  response = agent.step("Say hi to CAMEL AI community.")
  print(response.msgs[0].content)
  ```

  <Note type="info">
  **Flexible Model Access:** You can use any model available on CometAPI by passing the model name as a string to `model_type`, even if it's not in the predefined enums.
  </Note>

  **Environment Variables:**
  ```bash
  export COMETAPI_KEY="your_cometapi_key_here"
  export COMETAPI_API_BASE_URL="https://api.cometapi.com/v1/" # Optional
  ```

  **Model Support:**
  - **Complete Access:** All models available on [CometAPI](https://api.cometapi.com/) are supported
  - **Predefined Enums:** Common models like `COMETAPI_GPT_5_CHAT_LATEST`, `COMETAPI_CLAUDE_OPUS_4_1_20250805`, etc.
  - **String-based Access:** Use any model name directly as a string for maximum flexibility

  **Example with different models:**
  ```python
  # Access multiple frontier models through CometAPI
  models_to_try = [
      ModelType.COMETAPI_GPT_5_CHAT_LATEST,
      ModelType.COMETAPI_GPT_5,
      ModelType.COMETAPI_GPT_5_MINI,
      ModelType.COMETAPI_CLAUDE_OPUS_4_1_20250805,
      ModelType.COMETAPI_CLAUDE_SONNET_4_20250514,
      ModelType.COMETAPI_CLAUDE_3_7_SONNET_LATEST,
      ModelType.COMETAPI_GEMINI_2_5_PRO,
      ModelType.COMETAPI_GEMINI_2_5_FLASH,
      ModelType.COMETAPI_GROK_4_0709,
      ModelType.COMETAPI_GROK_3,
      ModelType.COMETAPI_DEEPSEEK_V3_1,
      ModelType.COMETAPI_DEEPSEEK_V3,
      ModelType.COMETAPI_QWEN3_30B_A3B,
      ModelType.COMETAPI_QWEN3_CODER_PLUS_2025_07_22
  ]

  for model_type in models_to_try:
      model = ModelFactory.create(
          model_platform=ModelPlatformType.COMETAPI,
          model_type=model_type
      )
      # Use the model...
  ```
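
  Mirroring the string-based access noted above, any CometAPI model name can also be passed directly as a string. The model name below is illustrative; pick any name from the CometAPI catalog:

  ```python
  from camel.models import ModelFactory
  from camel.types import ModelPlatformType

  # Any model listed on CometAPI can be referenced by its raw name,
  # even without a predefined ModelType enum.
  model = ModelFactory.create(
      model_platform=ModelPlatformType.COMETAPI,
      model_type="gpt-5-mini",             # any CometAPI model name string
      api_key="your_cometapi_key_here",    # or set the COMETAPI_KEY env var
  )
  print(model.model_type)
  ```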

  </Tab>

  <Tab title="Nebius">

  Leverage [Nebius AI Studio](https://nebius.com/)'s high-performance GPU cloud with OpenAI-compatible models:

 - **Nebius AI Studio** ([Platform](https://studio.nebius.com/)): Access powerful models through their cloud infrastructure.
 - **API Key Setup** ([Generate Key](https://studio.nebius.ai/settings/api-keys)): Obtain your Nebius API key to start integration.
 - **Nebius Docs** ([Documentation](https://nebius.com/docs/)): Explore detailed Nebius API capabilities.

  ```python
  from camel.models import ModelFactory
  from camel.types import ModelPlatformType, ModelType
  from camel.configs import NebiusConfig
  from camel.agents import ChatAgent

  model = ModelFactory.create(
      model_platform=ModelPlatformType.NEBIUS,
      model_type=ModelType.NEBIUS_GPT_OSS_120B,
      model_config_dict=NebiusConfig(temperature=0.2).as_dict(),
  )

  agent = ChatAgent(
      system_message="You are a helpful assistant.",
      model=model
  )

  response = agent.step("Say hi to CAMEL AI community.")
  print(response.msgs[0].content)
  ```

  <Note type="info">
    **Flexible Model Access:** You can use any model available on Nebius by passing the model name as a string to `model_type`, even if it's not in the predefined enums.
  </Note>

  **Environment Variables:**
  ```bash
  export NEBIUS_API_KEY="your_nebius_api_key"
  export NEBIUS_API_BASE_URL="https://api.studio.nebius.com/v1"  # Optional
  ```

  **Model Support:**
  - **Complete Access:** All models available on [Nebius AI Studio](https://studio.nebius.com/) are supported
  - **Predefined Enums:** Common models like `NEBIUS_GPT_OSS_120B`, `NEBIUS_DEEPSEEK_V3`, etc.
  - **String-based Access:** Use any model name directly as a string for maximum flexibility

  **Example with any model:**
  ```python
  # Use any model available on Nebius
  model = ModelFactory.create(
      model_platform=ModelPlatformType.NEBIUS,
      model_type="your-custom-model-name"  # Any Nebius model
  )
  ```

  </Tab>


  <Tab title="Qwen">

  Leverage [Qwen](https://qwenlm.github.io/)'s state-of-the-art models for coding and reasoning:

  ```python
  from camel.models import ModelFactory
  from camel.types import ModelPlatformType, ModelType
  from camel.configs import QwenConfig
  from camel.agents import ChatAgent

  model = ModelFactory.create(
      model_platform=ModelPlatformType.QWEN,
      model_type=ModelType.QWEN_2_5_CODER_32B,
      model_config_dict=QwenConfig(temperature=0.2).as_dict(),
  )

  agent = ChatAgent(system_message="You are a helpful assistant.", model=model)
  response = agent.step("Give me Python code to develop a trading bot.")
  print(response.msgs[0].content)
  ```

  </Tab>

  <Tab title="OpenRouter">

  Access a wide variety of models through [OpenRouter](https://openrouter.ai/)'s unified API:

  **Setup:** Set your OpenRouter API key as an environment variable:
  ```bash
  export OPENROUTER_API_KEY="your-api-key-here"
  ```

  ```python
  from camel.models import ModelFactory
  from camel.types import ModelPlatformType, ModelType
  from camel.configs import OpenRouterConfig
  from camel.agents import ChatAgent

  # Using predefined OpenRouter models
  model = ModelFactory.create(
      model_platform=ModelPlatformType.OPENROUTER,
      model_type=ModelType.OPENROUTER_LLAMA_3_1_70B,
      model_config_dict=OpenRouterConfig(temperature=0.2).as_dict(),
  )

  agent = ChatAgent(
      system_message="You are a helpful assistant.",
      model=model
  )

  response = agent.step("Say hi to CAMEL AI community.")
  print(response.msgs[0].content)
  ```

  <Note type="info">
    CAMEL supports several predefined OpenRouter models including:
    - `OPENROUTER_LLAMA_3_1_405B` - Meta's Llama 3.1 405B model
    - `OPENROUTER_LLAMA_3_1_70B` - Meta's Llama 3.1 70B model
    - `OPENROUTER_LLAMA_4_MAVERICK` - Meta's Llama 4 Maverick model
    - `OPENROUTER_LLAMA_4_SCOUT` - Meta's Llama 4 Scout model
    - `OPENROUTER_OLYMPICODER_7B` - Open R1's OlympicCoder 7B model
    - `OPENROUTER_HORIZON_ALPHA` - Horizon Alpha model

    Free versions are also available for some models (e.g., `OPENROUTER_LLAMA_4_MAVERICK_FREE`).
  </Note>

  You can also use any OpenRouter model via the OpenAI-compatible interface:

  ```python
  import os
  from camel.agents import ChatAgent
  from camel.models import ModelFactory
  from camel.types import ModelPlatformType

  # Use any model available on OpenRouter
  model = ModelFactory.create(
      model_platform=ModelPlatformType.OPENAI_COMPATIBLE_MODEL,
      model_type="anthropic/claude-3.5-sonnet",  # Any OpenRouter model
      url="https://openrouter.ai/api/v1",
      api_key=os.getenv("OPENROUTER_API_KEY"),
      model_config_dict={"temperature": 0.2},
  )

  agent = ChatAgent(
      system_message="You are a helpful assistant.",
      model=model
  )

  response = agent.step("Explain quantum computing in simple terms.")
  print(response.msgs[0].content)
  ```

  **Available Models:** View the full list of models available through OpenRouter at [openrouter.ai/models](https://openrouter.ai/models).

  </Tab>

  <Tab title="Groq">

  Using [Groq](https://groq.com/)'s powerful models (e.g., Llama 3.3-70B):

  ```python
  from camel.models import ModelFactory
  from camel.types import ModelPlatformType, ModelType
  from camel.configs import GroqConfig
  from camel.agents import ChatAgent

  model = ModelFactory.create(
      model_platform=ModelPlatformType.GROQ,
      model_type=ModelType.GROQ_LLAMA_3_3_70B,
      model_config_dict=GroqConfig(temperature=0.2).as_dict(),
  )

  agent = ChatAgent(
      system_message="You are a helpful assistant.",
      model=model
  )

  response = agent.step("Say hi to CAMEL AI community.")
  print(response.msgs[0].content)
  ```

  </Tab>

</Tabs>


## Using On-Device Open Source Models

<Card title="Run Open-Source LLMs Locally" icon="osi">
  Unlock true flexibility: CAMEL-AI supports running popular LLMs right on your own machine. Use Ollama, vLLM, or SGLang to experiment, prototype, or deploy privately (no cloud required).
</Card>

CAMEL-AI makes it easy to integrate local open-source models as part of your agent workflows. Here’s how you can get started with the most popular runtimes:

<Steps>
  <Step title="Using Ollama for Llama 3">
    <Steps>
      <Step title="Install Ollama">
        <a href="https://ollama.com/download" target="_blank">Download Ollama</a> and follow the installation steps for your OS.
      </Step>
      <Step title="Pull the Llama 3 model">
        ```bash
        ollama pull llama3
        ```
      </Step>
      <Step title="(Optional) Create a Custom Model">
        Create a file named <code>Llama3ModelFile</code>:
        ```
        FROM llama3

        PARAMETER temperature 0.8
        PARAMETER stop Result

        SYSTEM """ """
        ```
        You can also create a shell script <code>setup_llama3.sh</code>:
        ```bash
        #!/bin/zsh
        model_name="llama3"
        custom_model_name="camel-llama3"
        ollama pull $model_name
        ollama create $custom_model_name -f ./Llama3ModelFile
        ```
        Then make the script executable and run it:
        ```bash
        chmod +x setup_llama3.sh
        ./setup_llama3.sh
        ```
      </Step>
      <Step title="Integrate with CAMEL-AI">
        ```python
        from camel.agents import ChatAgent
        from camel.models import ModelFactory
        from camel.types import ModelPlatformType

        ollama_model = ModelFactory.create(
            model_platform=ModelPlatformType.OLLAMA,
            model_type="llama3",
            url="http://localhost:11434/v1",
            model_config_dict={"temperature": 0.4},
        )
        agent = ChatAgent("You are a helpful assistant.", model=ollama_model)
        response = agent.step("Say hi to CAMEL")
        print(response.msg.content)
        ```
      </Step>
    </Steps>
  </Step>

  <Step title="Using vLLM for Phi-3">
    <Steps>
      <Step title="Install vLLM">
        <a href="https://docs.vllm.ai/en/latest/getting_started/installation.html" target="_blank">Follow the vLLM installation guide</a> for your environment.
      </Step>
      <Step title="Start the vLLM server">
        ```bash
        python -m vllm.entrypoints.openai.api_server \
          --model microsoft/Phi-3-mini-4k-instruct \
          --api-key vllm --dtype bfloat16
        ```
      </Step>
      <Step title="Integrate with CAMEL-AI">
        ```python
        from camel.agents import ChatAgent
        from camel.models import ModelFactory
        from camel.types import ModelPlatformType

        vllm_model = ModelFactory.create(
            model_platform=ModelPlatformType.VLLM,
            model_type="microsoft/Phi-3-mini-4k-instruct",
            url="http://localhost:8000/v1",
            model_config_dict={"temperature": 0.0},
        )
        agent = ChatAgent("You are a helpful assistant.", model=vllm_model)
        response = agent.step("Say hi to CAMEL AI")
        print(response.msg.content)
        ```
      </Step>
    </Steps>
  </Step>

  <Step title="Using SGLang for Meta-Llama">
    <Steps>
      <Step title="Install SGLang">
        <a href="https://sgl-project.github.io/start/install.html" target="_blank">Follow the SGLang install instructions</a> for your platform.
      </Step>
      <Step title="Integrate with CAMEL-AI">
        ```python
        from camel.agents import ChatAgent
        from camel.models import ModelFactory
        from camel.types import ModelPlatformType

        sglang_model = ModelFactory.create(
            model_platform=ModelPlatformType.SGLANG,
            model_type="meta-llama/Llama-3.2-1B-Instruct",
            model_config_dict={"temperature": 0.0},
            api_key="sglang",
        )
        agent = ChatAgent("You are a helpful assistant.", model=sglang_model)
        response = agent.step("Say hi to CAMEL AI")
        print(response.msg.content)
        ```
      </Step>
    </Steps>
  </Step>
</Steps>

<Card
  title="Looking for more examples?"
  icon="book"
  href="https://github.com/camel-ai/camel/tree/master/examples/models"
>
  Explore the full <b>CAMEL-AI Examples</b> library for advanced workflows, tool integrations, and multi-agent demos.
</Card>

## Next Steps

You’ve now seen how to connect, configure, and optimize models with CAMEL-AI.

<Card
  title="Continue: Working with Messages"
  icon="arrow-right"
  href="https://docs.camel-ai.org/key_modules/messages"
>
  Learn how to create, format, and convert <b>BaseMessage</b> objects—the backbone of agent conversations in CAMEL-AI.
</Card>
