# Completion Function - completion()

The input params are **exactly the same** as the OpenAI Create chat completion endpoint, and let you call **Azure OpenAI, Anthropic, Cohere, Replicate, OpenRouter** models in the same format.

In addition, liteLLM allows you to pass in the following **optional** liteLLM args: `force_timeout`, `azure`, `logger_fn`, `verbose`.

## Input - Request Body

**Required Fields**

- `model`: *string* - ID of the model to use. Refer to the model endpoint compatibility table for details on which models work with the Chat API.

- `messages`: *array* - A list of messages comprising the conversation so far.

  *Note* - Each message in the array contains the following properties:

    - `role`: *string* - The role of the message's author. Roles can be: system, user, assistant, or function.

    - `content`: *string or null* - The contents of the message. It is required for all messages, but may be null for assistant messages with function calls.

    - `name`: *string (optional)* - The name of the author of the message. It is required if the role is "function", and should match the name of the function represented in the content. It can contain characters (a-z, A-Z, 0-9) and underscores, with a maximum length of 64 characters.

    - `function_call`: *object (optional)* - The name and arguments of a function that should be called, as generated by the model.

**Optional Fields**

- `functions`: *array* - A list of functions the model may generate JSON inputs for. Each function should have the following properties:

    - `name`: *string* - The name of the function to be called. It can contain a-z, A-Z, 0-9, underscores, and dashes, with a maximum length of 64 characters.

    - `description`: *string (optional)* - A description of what the function does, used by the model to decide when and how to call it.

    - `parameters`: *object* - The parameters the function accepts, described as a JSON Schema object.

- `function_call`: *string or object (optional)* - Controls how the model responds to function calls.

- `temperature`: *number or null (optional)* - The sampling temperature to use, between 0 and 2. Higher values like 0.8 produce more random outputs, while lower values like 0.2 make outputs more focused and deterministic.

- `top_p`: *number or null (optional)* - An alternative to sampling with temperature, where the model considers only the tokens comprising the top `top_p` probability mass. For example, 0.1 means only the tokens in the top 10% probability mass are considered.

- `n`: *integer or null (optional)* - The number of chat completion choices to generate for each input message.

- `stream`: *boolean or null (optional)* - If set to true, partial message deltas are sent. Tokens are sent as they become available, with the stream terminated by a `[DONE]` message (see the streaming sketch at the end of this section).

- `stop`: *string or array or null (optional)* - Up to 4 sequences where the API will stop generating further tokens.

- `max_tokens`: *integer (optional)* - The maximum number of tokens to generate in the chat completion.

- `presence_penalty`: *number or null (optional)* - Penalizes new tokens based on whether they have appeared in the text so far.

- `frequency_penalty`: *number or null (optional)* - Penalizes new tokens based on how frequently they have appeared in the text so far.

- `logit_bias`: *map (optional)* - Modifies the probability of specific tokens appearing in the completion.

- `user`: *string (optional)* - A unique identifier representing your end-user. This can help OpenAI to monitor and detect abuse.
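Below is a minimal sketch of how these fields come together in a `completion()` call, covering a basic request, streaming, and function calling. It assumes `litellm` is installed and the API key for your chosen provider (e.g. `OPENAI_API_KEY`) is set in the environment; the `get_current_weather` function is a hypothetical example, not part of the library.

```python
from litellm import completion

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]

# Basic call: the two required fields plus a few optional sampling params.
response = completion(
    model="gpt-3.5-turbo",
    messages=messages,
    temperature=0.2,  # lower temperature -> more deterministic output
    max_tokens=100,   # cap the length of the generated reply
)
print(response["choices"][0]["message"]["content"])

# Streaming call: with stream=True, partial message deltas are yielded
# as tokens become available.
for chunk in completion(model="gpt-3.5-turbo", messages=messages, stream=True):
    print(chunk)

# Function-calling call: `functions` describes a callable tool with JSON
# Schema `parameters`; the model may respond with a `function_call` object
# instead of plain text content. `get_current_weather` is hypothetical.
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given city",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "The city name"},
            },
            "required": ["location"],
        },
    }
]
response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What's the weather in Boston?"}],
    functions=functions,
)
print(response["choices"][0]["message"])
```

Because the params mirror the OpenAI format, the same call should work across the supported providers by swapping the `model` string.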