# gpt-4o / model.yml
name: GPT-4o
model: gpt-4o
version: 1
files: []
# Results Preferences
top_p: 0.95
temperature: 0.7
frequency_penalty: 0
presence_penalty: 0
max_tokens: 4096 # Infer from base config.json -> max_position_embeddings
stream: true # true | false
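# A minimal sketch (assumption, not part of this config): the preference fields
# above map one-to-one onto OpenAI chat completion parameters, e.g. with the
# official Python SDK (the prompt text is illustrative):
#   from openai import OpenAI
#   client = OpenAI()  # reads OPENAI_API_KEY from the environment
#   resp = client.chat.completions.create(
#       model="gpt-4o",
#       messages=[{"role": "user", "content": "Hello"}],
#       temperature=0.7, top_p=0.95,
#       frequency_penalty=0, presence_penalty=0,
#       max_tokens=4096, stream=True,
#   )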
# Engine / Model Settings
engine: openai
metadata:
  author: OpenAI
  description: >-
    GPT-4o ("o" for "omni") is a step towards much more natural human-computer
    interaction: it accepts as input any combination of text, audio, image, and
    video and generates any combination of text, audio, and image outputs. It
    can respond to audio inputs in as little as 232 milliseconds, with an
    average of 320 milliseconds, which is similar to human response time in a
    conversation. It matches GPT-4 Turbo performance on text in English and
    code, with significant improvement on text in non-English languages, while
    also being much faster and 50% cheaper in the API. GPT-4o is especially
    better at vision and audio understanding compared to existing models.
  end_point: https://api.openai.com/v1/chat/completions
  logo: https://i.pinimg.com/564x/08/ea/94/08ea94ca94a4b3a04037bdfc335ae00d.jpg
  api_key_url: https://platform.openai.com/api-keys
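# Usage sketch (assumption: a key created at api_key_url above is exported as
# OPENAI_API_KEY, and requests go to the chat completions end_point). With
# stream: true, the response from the call sketched after the preference block
# is iterated chunk by chunk:
#   for chunk in resp:
#       if chunk.choices and chunk.choices[0].delta.content:
#           print(chunk.choices[0].delta.content, end="")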