
# Exception Mapping

LiteLLM maps exceptions across all providers to their OpenAI counterparts.

  • Rate Limit Errors
  • Invalid Request Errors
  • Authentication Errors
  • Timeout Errors openai.APITimeoutError
  • ServiceUnavailableError
  • APIError
  • APIConnectionError

In the base case, we return an `APIConnectionError`.

All our exceptions inherit from OpenAI's exception types, so any error handling you have for OpenAI exceptions should work out of the box with LiteLLM.

For all cases, the exception returned inherits from the original OpenAI Exception but contains 3 additional attributes:

  • status_code - the http status code of the exception
  • message - the error message
  • llm_provider - the provider raising the exception

## Usage

```python
import litellm
import openai

try:
    response = litellm.completion(
        model="gpt-4",
        messages=[
            {
                "role": "user",
                "content": "hello, write a 20 page essay"
            }
        ],
        timeout=0.01, # this will raise a timeout exception
    )
except openai.APITimeoutError as e:
    print("Passed: Raised correct exception. Got openai.APITimeoutError\nGood Job", e)
    print(type(e))
```

## Usage - Catching Streaming Exceptions

```python
import litellm
import openai

try:
    response = litellm.completion(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "user",
                "content": "hello, write a 20 pg essay"
            }
        ],
        timeout=0.0001, # this will raise an exception
        stream=True,
    )
    for chunk in response:
        print(chunk)
except openai.APITimeoutError as e:
    print("Passed: Raised correct exception. Got openai.APITimeoutError\nGood Job", e)
    print(type(e))
except Exception as e:
    print(f"Did not raise error `openai.APITimeoutError`. Instead raised error type: {type(e)}, Error: {e}")
```

## Details

To see how it's implemented, check out the code.

Create an issue or make a PR if you want to improve the exception mapping.

**Note:** For OpenAI and Azure we return the original exception (since they're already of the OpenAI error type), but we add the `llm_provider` attribute to them. See code.

## Custom mapping list

Base case - we return the original exception.

| LLM Provider | ContextWindowExceededError | AuthenticationError | InvalidRequestError | RateLimitError | ServiceUnavailableError |
|---|---|---|---|---|---|
| Anthropic | βœ… | βœ… | βœ… | βœ… | |
| OpenAI | βœ… | βœ… | βœ… | βœ… | βœ… |
| Replicate | βœ… | βœ… | βœ… | βœ… | βœ… |
| Cohere | βœ… | βœ… | βœ… | βœ… | βœ… |
| Huggingface | βœ… | βœ… | βœ… | βœ… | |
| Openrouter | βœ… | βœ… | βœ… | βœ… | |
| AI21 | βœ… | βœ… | βœ… | βœ… | |
| VertexAI | | | βœ… | | |
| Bedrock | | | βœ… | | |
| Sagemaker | | | βœ… | | |
| TogetherAI | βœ… | βœ… | βœ… | βœ… | |
| AlephAlpha | βœ… | βœ… | βœ… | βœ… | βœ… |

For a deeper understanding of these exceptions, you can check out this implementation.

The `ContextWindowExceededError` is a subclass of `InvalidRequestError`. It was introduced to provide more granularity for exception-handling scenarios. Please refer to this issue to learn more.
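Because it is a subclass, catch order matters: the more specific `ContextWindowExceededError` handler must come before the generic `InvalidRequestError` one, or Python will never reach it. A minimal sketch of the pattern, using stand-in classes (not litellm's real exception classes) so it runs without litellm installed:

```python
# Stand-ins mirroring the documented hierarchy:
# ContextWindowExceededError is a subclass of InvalidRequestError.
class InvalidRequestError(Exception):
    pass

class ContextWindowExceededError(InvalidRequestError):
    pass

def classify(exc):
    try:
        raise exc
    except ContextWindowExceededError:   # specific subclass first
        return "shrink the prompt and retry"
    except InvalidRequestError:          # generic parent second
        return "fix the request"

print(classify(ContextWindowExceededError()))  # shrink the prompt and retry
print(classify(InvalidRequestError()))         # fix the request
```

If the two `except` clauses were reversed, the parent class would catch both exceptions and the context-window-specific branch would be dead code.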

Contributions to improve exception mapping are welcome.