## Different LLMs

AG2 installs the OpenAI package by default. To use LLMs from other providers, you can install the corresponding optional dependencies:

```bash
pip install "ag2[openai,gemini,anthropic,mistral,together,groq,cohere,yepcode]"
```

<Tip>
If you have been using `autogen` or `ag2`, all you need to do is upgrade it using:
```bash
pip install -U "autogen[openai,gemini,anthropic,mistral,together,groq,cohere,yepcode]"
```
or
```bash
pip install -U "ag2[openai,gemini,anthropic,mistral,together,groq,cohere,yepcode]"
```
as `autogen` and `ag2` are aliases for the same PyPI package.
</Tip>

Check out the [notebook](/docs/use-cases/notebooks/notebooks/autogen_uniformed_api_calling) and
[blogpost](/docs/blog/2024-06-24-AltModels-Classes/index) for more details.
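
Once the relevant extras are installed, the provider is typically selected per entry in your LLM configuration via the `api_type` field. A minimal sketch, where the model names and API keys are illustrative placeholders:

```python
# Each entry selects a provider through "api_type"; the model names and
# keys below are illustrative placeholders, not recommendations.
config_list = [
    {"api_type": "openai", "model": "gpt-4o", "api_key": "sk-..."},
    {"api_type": "anthropic", "model": "claude-3-5-sonnet-20240620", "api_key": "sk-ant-..."},
    {"api_type": "google", "model": "gemini-1.5-flash", "api_key": "..."},
]

# The list is then passed to an agent, e.g.:
# assistant = autogen.AssistantAgent("assistant", llm_config={"config_list": config_list})
```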

## LLM Caching

To use LLM caching with Redis, you need to install the Python package with
the option `redis`:

```bash
pip install "ag2[redis]"
```

See [LLM Caching](/docs/api-reference/autogen/Cache) for details.
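
As a minimal sketch of how the Redis cache is wired in (assuming a Redis server at the default local address, and with an illustrative `llm_config`), the `Cache.redis` context manager is passed to the chat call:

```python
from autogen import AssistantAgent, UserProxyAgent
from autogen.cache import Cache

# llm_config is your usual provider configuration (values are placeholders)
assistant = AssistantAgent("assistant", llm_config={"model": "gpt-4o", "api_key": "sk-..."})
user = UserProxyAgent("user", code_execution_config=False, human_input_mode="NEVER")

# Responses are cached in the local Redis instance; repeating the same
# request is then served from the cache instead of calling the LLM.
with Cache.redis(redis_url="redis://localhost:6379/0") as cache:
    user.initiate_chat(assistant, message="What is 2 + 2?", cache=cache)
```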

## IPython Code Executor

To use the IPython code executor, you need to install the `jupyter-client`
and `ipykernel` packages:

```bash
pip install "ag2[ipython]"
```

To use the IPython code executor:

```python
from autogen import UserProxyAgent

proxy = UserProxyAgent(name="proxy", code_execution_config={"executor": "ipython-embedded"})
```

## YepCode Serverless Code Executor

To use YepCode's serverless code execution platform, install AG2 with the `yepcode` option:

```bash
pip install "ag2[yepcode]"
```

YepCode provides secure, production-grade serverless code execution with automatic dependency management for both Python and JavaScript. Get your API token from [yepcode.io](https://yepcode.io).

To use the YepCode executor:

```python
from autogen.coding import YepCodeCodeExecutor
from autogen import ConversableAgent

# Set your API token as environment variable
# export YEPCODE_API_TOKEN=your_token

executor = YepCodeCodeExecutor(
    timeout=120,
    sync_execution=True
)

agent = ConversableAgent(
    "agent",
    llm_config=False,
    code_execution_config={"executor": executor},
    human_input_mode="NEVER"
)
```

Alternatively, use the factory pattern:

```python
from autogen.coding import CodeExecutorFactory

executor = CodeExecutorFactory.create({
    "executor": "yepcode",
    "yepcode": {"timeout": 120}
})
```

Example notebook: [YepCode Executor](/docs/use-cases/notebooks/notebooks/agentchat_yepcode_executor)

## retrievechat

`ag2` supports retrieval-augmented generation (RAG) tasks such as question answering and code generation. Please install with the [retrievechat] option to use the RAG agents with ChromaDB.

```bash
pip install "ag2[retrievechat]"
```

Alternatively, `ag2` also supports PGVector and Qdrant, which can be installed in place of ChromaDB or alongside it.

```bash
pip install "ag2[retrievechat-pgvector]"
```

```bash
pip install "ag2[retrievechat-qdrant]"
```

RetrieveChat can handle various types of documents. By default, it can process
plain text and PDF files, including formats such as 'txt', 'json', 'csv', 'tsv',
'md', 'html', 'htm', 'rtf', 'rst', 'jsonl', 'log', 'xml', 'yaml', 'yml' and 'pdf'.
If you install [unstructured](https://unstructured-io.github.io/unstructured/installation/full_installation.html)
(`pip install "unstructured[all-docs]"`), additional document types such as 'docx',
'doc', 'odt', 'pptx', 'ppt', 'xlsx', 'eml', 'msg', 'epub' will also be supported.

You can find a list of all supported document types by using `autogen.retrieve_utils.TEXT_FORMATS`.
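
As a minimal sketch of the RAG workflow (the model, key, and document path are illustrative placeholders), a `RetrieveUserProxyAgent` indexes your documents and injects retrieved chunks into the conversation:

```python
from autogen import AssistantAgent
from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProxyAgent

assistant = AssistantAgent("assistant", llm_config={"model": "gpt-4o", "api_key": "sk-..."})

# The proxy chunks and embeds the files under docs_path into a ChromaDB
# collection, then retrieves relevant chunks at question time.
ragproxy = RetrieveUserProxyAgent(
    "ragproxy",
    human_input_mode="NEVER",
    retrieve_config={
        "task": "qa",
        "docs_path": "./docs",  # illustrative path
    },
)

ragproxy.initiate_chat(
    assistant,
    message=ragproxy.message_generator,
    problem="What does the documentation say about installation?",
)
```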

Example notebooks:

[Automated Code Generation and Question Answering with Retrieval Augmented Agents](https://github.com/ag2ai/ag2/blob/main/notebook/agentchat_RetrieveChat.ipynb)

[Group Chat with Retrieval Augmented Generation (with 5 group member agents and 1 manager agent)](https://github.com/ag2ai/ag2/blob/main/notebook/agentchat_groupchat_RAG.ipynb)

[Automated Code Generation and Question Answering with Qdrant based Retrieval Augmented Agents](https://github.com/ag2ai/ag2/blob/main/notebook/agentchat_RetrieveChat_qdrant.ipynb)

## Teachability

To use Teachability, please install AG2 with the [teachable] option.

```bash
pip install "ag2[teachable]"
```
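
A minimal sketch of attaching the capability to an agent (the `llm_config` values and database path are illustrative placeholders):

```python
from autogen import ConversableAgent
from autogen.agentchat.contrib.capabilities.teachability import Teachability

agent = ConversableAgent("teachable_agent", llm_config={"model": "gpt-4o", "api_key": "sk-..."})

# Teachability stores what the user teaches the agent in a local vector
# DB and recalls it in later conversations.
teachability = Teachability(path_to_db_dir="./tmp/teachability_db")
teachability.add_to_agent(agent)
```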

Example notebook: [Chatting with a teachable agent](/docs/use-cases/notebooks/notebooks/agentchat_teachability)

## Large Multimodal Model (LMM) Agents

We offer the Multimodal Conversable Agent and the LLaVA Agent. Please install with the [lmm] option to use them.

```bash
pip install "ag2[lmm]"
```
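
A minimal sketch of a multimodal agent (the model, key, and image URL are illustrative placeholders; the `llm_config` must point at a vision-capable model):

```python
from autogen.agentchat.contrib.multimodal_conversable_agent import MultimodalConversableAgent

image_agent = MultimodalConversableAgent(
    "image_agent",
    llm_config={"config_list": [{"model": "gpt-4o", "api_key": "sk-..."}]},
)

# Images are referenced inline in the message text with an <img> tag, e.g.:
# user.initiate_chat(image_agent, message="Describe this image: <img https://example.com/cat.png>")
```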

Example notebook: [LLaVA Agent](/docs/use-cases/notebooks/notebooks/agentchat_lmm_llava)

## Graph

To use a graph in `GroupChat`, particularly for graph visualization, please install AG2 with the [graph] option.

```bash
pip install "ag2[graph]"
```
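
A minimal sketch of constraining speaker transitions with a graph (agent names and `llm_config` values are illustrative): each key in the transition dict may only hand the conversation to the agents listed as its value.

```python
from autogen import ConversableAgent, GroupChat

llm_config = {"model": "gpt-4o", "api_key": "sk-..."}  # placeholder values
planner = ConversableAgent("planner", llm_config=llm_config)
coder = ConversableAgent("coder", llm_config=llm_config)
reviewer = ConversableAgent("reviewer", llm_config=llm_config)

# Finite-state-machine graph of allowed speaker transitions.
allowed_transitions = {
    planner: [coder],
    coder: [reviewer],
    reviewer: [planner, coder],
}

groupchat = GroupChat(
    agents=[planner, coder, reviewer],
    messages=[],
    allowed_or_disallowed_speaker_transitions=allowed_transitions,
    speaker_transitions_type="allowed",
)
```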

Example notebook: [Finite State Machine graphs to set speaker transition constraints](/docs/use-cases/notebooks/notebooks/agentchat_groupchat_finite_state_machine)

## Long Context Handling

AG2 includes support for handling long textual contexts by leveraging the LLMLingua library for text compression. To enable this functionality, please install AG2 with the `[long-context]` option:

```bash
pip install "ag2[long-context]"
```
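
A minimal sketch of attaching LLMLingua-based compression to an agent (the `llm_config` values are placeholders; the compression model is downloaded on first use):

```python
from autogen import ConversableAgent
from autogen.agentchat.contrib.capabilities.text_compressors import LLMLingua
from autogen.agentchat.contrib.capabilities.transform_messages import TransformMessages
from autogen.agentchat.contrib.capabilities.transforms import TextMessageCompressor

agent = ConversableAgent("assistant", llm_config={"model": "gpt-4o", "api_key": "sk-..."})

# Compress long message histories with LLMLingua before they are sent
# to the LLM, reducing token usage on long contexts.
compressor = TextMessageCompressor(text_compressor=LLMLingua())
TransformMessages(transforms=[compressor]).add_to_agent(agent)
```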

## mathchat

`ag2<0.2` offers an experimental agent for math problem solving. Please install with the [mathchat] option to use it.

```bash
pip install "ag2[mathchat]<0.2"
```

Example notebook: [Using MathChat to Solve Math Problems](https://github.com/ag2ai/ag2/blob/main/notebook/agentchat_MathChat.ipynb)

## blendsearch

`ag2<0.2` offers a cost-effective hyperparameter optimization technique [EcoOptiGen](https://arxiv.org/abs/2303.04673) for tuning Large Language Models. Please install with the [blendsearch] option to use it.

```bash
pip install "ag2[blendsearch]<0.2"
```

Check out [Optimize for Code Generation](https://github.com/ag2ai/ag2/blob/main/notebook/oai_completion.ipynb) and [Optimize for Math](https://github.com/ag2ai/ag2/blob/main/notebook/oai_chatgpt_gpt4.ipynb) for details.
