<a id="camel.models.litellm_model"></a>

<a id="camel.models.litellm_model.LiteLLMModel"></a>

## LiteLLMModel

```python
class LiteLLMModel(BaseModelBackend):
```

Backend for the LiteLLM service with OpenAI compatibility.

**Parameters:**

- **model_type** (Union[ModelType, str]): Model for which a backend is created, such as GPT-3.5-turbo, Claude-2, etc.
- **model_config_dict** (Optional[Dict[str, Any]], optional): A dictionary that will be fed into :obj:`completion()`. If :obj:`None`, :obj:`LiteLLMConfig().as_dict()` will be used. (default: :obj:`None`)
- **api_key** (Optional[str], optional): The API key for authenticating with the model service. (default: :obj:`None`)
- **url** (Optional[str], optional): The URL to the model service. (default: :obj:`None`)
- **token_counter** (Optional[BaseTokenCounter], optional): Token counter to use for the model. If not provided, :obj:`LiteLLMTokenCounter` will be used. (default: :obj:`None`)
- **timeout** (Optional[float], optional): The timeout value in seconds for API calls. If not provided, will fall back to the MODEL_TIMEOUT environment variable or default to 180 seconds. (default: :obj:`None`)
- **kwargs** (Any): Additional arguments to pass to the client initialization.

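As an illustration, a `model_config_dict` is a plain dictionary whose keys are forwarded to :obj:`completion()` on each call. The keys shown below (`temperature`, `max_tokens`) are common OpenAI-style sampling options; the exact set supported depends on the underlying provider, so treat this as a sketch rather than an exhaustive list:

```python
# Hedged sketch: these keys are common OpenAI-style sampling options
# forwarded by the backend to litellm's completion(); provider support varies.
model_config_dict = {
    "temperature": 0.2,  # lower values -> more deterministic output
    "max_tokens": 512,   # cap on the number of generated tokens
}

# The backend merges this dict into each completion() call.
print(sorted(model_config_dict))
```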
<a id="camel.models.litellm_model.LiteLLMModel.__init__"></a>

### __init__

```python
def __init__(
    self,
    model_type: Union[ModelType, str],
    model_config_dict: Optional[Dict[str, Any]] = None,
    api_key: Optional[str] = None,
    url: Optional[str] = None,
    token_counter: Optional[BaseTokenCounter] = None,
    timeout: Optional[float] = None,
    **kwargs: Any
):
```

<a id="camel.models.litellm_model.LiteLLMModel._convert_response_from_litellm_to_openai"></a>

### _convert_response_from_litellm_to_openai

```python
def _convert_response_from_litellm_to_openai(self, response):
```

Converts a response from the LiteLLM format to the OpenAI format.

**Parameters:**

- **response** (LiteLLMResponse): The response object from LiteLLM.

**Returns:**

  ChatCompletion: The response object in OpenAI's format.
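The field correspondence can be pictured with plain dictionaries. The sketch below is purely illustrative (the actual method builds typed :obj:`ChatCompletion` objects rather than dicts), and the sample field names follow the common LiteLLM/OpenAI response shape:

```python
# Illustrative sketch only: shows the field correspondence between a
# LiteLLM-style response and an OpenAI-style chat.completion payload.
# The real method constructs a typed ChatCompletion object instead.

def convert_litellm_to_openai_dict(response: dict) -> dict:
    return {
        "id": response["id"],
        "object": "chat.completion",
        "created": response["created"],
        "model": response["model"],
        "choices": [
            {
                "index": choice["index"],
                "message": {
                    "role": choice["message"]["role"],
                    "content": choice["message"]["content"],
                },
                "finish_reason": choice["finish_reason"],
            }
            for choice in response["choices"]
        ],
        "usage": response.get("usage"),
    }

# Hypothetical LiteLLM-style response used only for demonstration.
litellm_response = {
    "id": "chatcmpl-123",
    "created": 1700000000,
    "model": "gpt-3.5-turbo",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello!"},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 5, "completion_tokens": 2, "total_tokens": 7},
}

openai_response = convert_litellm_to_openai_dict(litellm_response)
print(openai_response["choices"][0]["message"]["content"])  # Hello!
```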

<a id="camel.models.litellm_model.LiteLLMModel.token_counter"></a>

### token_counter

```python
def token_counter(self):
```

**Returns:**

  BaseTokenCounter: The token counter following the model's
tokenization style.

<a id="camel.models.litellm_model.LiteLLMModel._run"></a>

### _run

```python
def _run(
    self,
    messages: List[OpenAIMessage],
    response_format: Optional[Type[BaseModel]] = None,
    tools: Optional[List[Dict[str, Any]]] = None
):
```

Runs inference of LiteLLM chat completion.

**Parameters:**

- **messages** (List[OpenAIMessage]): Message list with the chat history in OpenAI format.
- **response_format** (Optional[Type[BaseModel]], optional): A Pydantic model describing the expected structured response. (default: :obj:`None`)
- **tools** (Optional[List[Dict[str, Any]]], optional): A list of tool schemas to expose to the model. (default: :obj:`None`)

**Returns:**

ChatCompletion: The chat completion in OpenAI's format.
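For reference, a minimal `messages` list in OpenAI chat format is a sequence of role/content dictionaries, as in this sketch (the message contents are placeholders):

```python
# Minimal sketch of the OpenAI chat message format expected by _run():
# each entry is a dict with "role" and "content" keys.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is CAMEL?"},
]

print([m["role"] for m in messages])  # ['system', 'user']
```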

<a id="camel.models.litellm_model.LiteLLMModel.check_model_config"></a>

### check_model_config

```python
def check_model_config(self):
```

Check whether the model configuration contains any unexpected arguments to LiteLLM API.