Instruction | Input | Output
---|---|---
float64 | string (lengths 51 to 2.37k) | string (5 distinct values)
Flagship Models
GPT-4o (New)
Description: Our fastest and most affordable flagship model
Capabilities: Text and image input, text output
Context Length: 128k tokens
Pricing: Input $5 | Output $15 (per 1 million tokens)
GPT-4 Turbo
Description: Our previous high-intelligence model
Capabilities: Text and image input, text output
Context Length: 128k tokens
Pricing: Input $10 | Output $30 (per 1 million tokens)
GPT-3.5 Turbo
Description: Our fast, inexpensive model for simple tasks
Capabilities: Text input, text output
Context Length: 16k tokens
Pricing: Input $0.50 | Output $1.50 (per 1 million tokens)
Detailed Model Descriptions
GPT-4o
GPT-4o (“o” for “omni”) is our most advanced model. It is multimodal (accepting text or image inputs and outputting text), and it has the same high intelligence as GPT-4 Turbo but is much more efficient: it generates text 2x faster and is 50% cheaper. Additionally, GPT-4o has the best vision and performance across non-English languages of any of our models. GPT-4o is available in the OpenAI API to paying customers. Learn how to use GPT-4o in our text generation guide.
Reproducible Outputs (Beta)
Chat Completions are non-deterministic by default (which means model outputs may differ from request to request). That being said, we offer some control towards deterministic outputs by giving you access to the seed parameter and the system_fingerprint response field.
Deterministic Outputs
To receive (mostly) deterministic outputs across API calls, you can set the seed parameter to the same integer across requests, keep all other request parameters identical, and track the system_fingerprint response field to detect backend changes.
```python
def get_database_info(conn):
    """Return a list of dicts containing the table name and columns for each table in the database."""
    table_dicts = []
    for table_name in get_table_names(conn):
        columns_names = get_column_names(conn, table_name)
        table_dicts.append({"table_name": table_name, "column_names": columns_names})
    return table_dicts
```
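The helpers get_table_names and get_column_names are not shown in this excerpt; a minimal sketch for a SQLite connection, using only the standard sqlite3 module (the database file name is illustrative):

```python
import sqlite3

def get_table_names(conn):
    """Return a list of table names in the SQLite database."""
    tables = conn.execute("SELECT name FROM sqlite_master WHERE type='table';")
    return [row[0] for row in tables.fetchall()]

def get_column_names(conn, table_name):
    """Return a list of column names for a given table."""
    columns = conn.execute(f"PRAGMA table_info('{table_name}');").fetchall()
    return [col[1] for col in columns]

# Example: conn = sqlite3.connect("chinook.db")
```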
Step 2: Extract Database Schema
```python
database_schema_dict = get_database_info(conn)
database_schema_string = "\n".join(
    [
        f"Table: {table['table_name']}\nColumns: {', '.join(table['column_names'])}"
        for table in database_schema_dict
    ]
)
```
Step 3: Define Function Specification
```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "ask_database",
            "description": "Use this function to answer user questions about music. Input should be a fully formed SQL query.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": f"""
                            SQL query extracting info to answer the user's question.
                            SQL should be written using this database schema:
                            {database_schema_string}
                            The query should be returned in plain text, not in JSON.
                            """,
                    }
                },
                "required": ["query"],
            },
        },
    }
]
```
Step 4: Implement SQL Query Function
```python
def ask_database(conn, query):
    """Function to query SQLite database with a provided SQL query."""
    try:
        results = str(conn.execute(query).fetchall())
    except Exception as e:
        results = f"query failed with error: {e}"
    return results
```
Step 5: Invoke Function Call Using Chat Completions API
```python
# Step 1: Prompt with content that may result in function call
messages = [{"role": "user", "content": "What is the name of the album with the most tracks?"}]
response = client.chat.completions.create(
    model='gpt-4o',
    messages=messages,
    tools=tools,
    tool_choice="auto"
)
response_message = response.choices[0].message
messages.append(response_message)
pretty_print_conversation(messages)
```
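The excerpt stops before the tool call is actually executed. A minimal continuation sketch, assuming `client` is an `openai.OpenAI()` instance and reusing `ask_database` and `conn` from the steps above:

```python
import json

# If the model decided to call ask_database, execute it and return the result to the model.
for tool_call in (response_message.tool_calls or []):
    if tool_call.function.name == "ask_database":
        args = json.loads(tool_call.function.arguments)
        messages.append(
            {
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": ask_database(conn, args["query"]),
            }
        )

# Ask the model to phrase the raw SQL result as a natural-language answer.
followup = client.chat.completions.create(model="gpt-4o", messages=messages)
print(followup.choices[0].message.content)
```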
Example Deterministic Output API Call
Explore the new seed parameter in the OpenAI cookbook.
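The cookbook example itself is not reproduced here; a minimal sketch of a seeded call, assuming the openai Python client:

```python
SEED = 12345

completion = client.chat.completions.create(
    model="gpt-4o",
    seed=SEED,          # request (mostly) deterministic sampling
    temperature=0,
    messages=[{"role": "user", "content": "Tell me a one-sentence story."}],
)

# If system_fingerprint changes between calls, backend changes may alter outputs
# even when the same seed is used.
print(completion.system_fingerprint)
print(completion.choices[0].message.content)
```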
```python
thread = client.beta.threads.create(
    messages=[
        {
            "role": "user",
            "content": "Create 3 data visualizations based on the trends in this file.",
            "attachments": [
                {
                    "file_id": file.id,
                    "tools": [{"type": "code_interpreter"}]
                }
            ]
        }
    ]
)
```
Image Input Content
Message content can contain either external image URLs or File IDs uploaded via the File API. Only models with Vision support can accept image input. Supported image content types include png, jpg, gif, and webp. When creating image files, pass purpose="vision" to allow you to later download and display the input content.
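As an illustration of the note above, a minimal sketch assuming the openai Python client; the local file path is hypothetical and the message-content part names follow the Assistants v2 format:

```python
# Upload an image file; purpose="vision" marks it as image input content.
image_file = client.files.create(file=open("chart.png", "rb"), purpose="vision")

thread = client.beta.threads.create(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe the trend shown in this chart."},
                {"type": "image_file", "image_file": {"file_id": image_file.id}},
                # Alternatively, reference an external URL:
                # {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ]
)
```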
```json
{
  "id": "run_qJL1kI9xxWlfE0z1yfL0fGg9",
  ...
  "status": "requires_action",
  "required_action": {
    "submit_tool_outputs": {
      "tool_calls": [
        {
          "id": "call_FthC9qRpsL5kBpwwyw6c7j4k",
          "function": {
            "arguments": "{\"location\": \"San Francisco, CA\"}",
            "name": "get_rain_probability"
          },
          "type": "function"
        },
        {
          "id": "call_RpEDoB8O0FTL9JoKTuCVFOyR",
          "function": {
            "arguments": "{\"location\": \"San Francisco, CA\", \"unit\": \"Fahrenheit\"}",
            "name": "get_current_temperature"
          },
          "type": "function"
        }
      ]
    },
    ...
    "type": "submit_tool_outputs"
  }
}
```
Step 4: Handle Tool Calls and Submit Outputs
How you initiate a Run and submit tool_calls will differ depending on whether you are using streaming or not, although in both cases all tool_calls need to be submitted at the same time. You can then complete the Run by submitting the tool outputs from the functions you called. Pass each tool_call_id referenced in the required_action object to match outputs to each function call.
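A minimal non-streaming sketch, assuming a run in the requires_action state like the one shown above, the openai Python client, and hypothetical local handlers get_rain_probability and get_current_temperature:

```python
import json

tool_outputs = []
for tool_call in run.required_action.submit_tool_outputs.tool_calls:
    args = json.loads(tool_call.function.arguments)
    if tool_call.function.name == "get_rain_probability":
        output = get_rain_probability(**args)        # hypothetical local function
    elif tool_call.function.name == "get_current_temperature":
        output = get_current_temperature(**args)     # hypothetical local function
    else:
        output = "unknown tool"
    tool_outputs.append({"tool_call_id": tool_call.id, "output": str(output)})

# All tool_calls must be answered in a single submission.
run = client.beta.threads.runs.submit_tool_outputs(
    thread_id=thread.id, run_id=run.id, tool_outputs=tool_outputs
)
print(run.status)
```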
```json
{
  "id": "msg_abc123",
  "object": "thread.message",
  "created_at": 1699073585,
  "thread_id": "thread_abc123",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": {
        "value": "The rows of the CSV file have been shuffled and saved to a new CSV file. You can download the shuffled CSV file from the following link:\n\n[Download Shuffled CSV File](sandbox:/mnt/data/shuffled_file.csv)",
        "annotations": [
          {
            "type": "file_path",
            "text": "sandbox:/mnt/data/shuffled_file.csv",
            "start_index": 167,
            "end_index": 202,
            "file_path": {
              "file_id": "file-abc123"
            }
          }
        ]
      }
    }
  ]
}
```
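The file_id in the file_path annotation can be used to download the generated file. A minimal sketch, assuming the openai Python client; the output file name is chosen for illustration:

```python
file_id = "file-abc123"  # taken from the annotation above

file_content = client.files.content(file_id)
file_content.write_to_file("shuffled_file.csv")  # save the generated CSV locally
```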
Input and Output Logs of Code Interpreter
By listing the steps of a Run that called Code Interpreter, you can inspect the code input and output logs of Code Interpreter:
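The listing call itself is not included in this excerpt; a minimal sketch, assuming the openai Python client and existing thread and run objects:

```python
run_steps = client.beta.threads.runs.steps.list(thread_id=thread.id, run_id=run.id)

for step in run_steps.data:
    details = step.step_details
    if details.type == "tool_calls":
        for tool_call in details.tool_calls:
            if tool_call.type == "code_interpreter":
                print("input:", tool_call.code_interpreter.input)      # code that was executed
                print("outputs:", tool_call.code_interpreter.outputs)  # logs / images produced
```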
```python
# The thread now has a vector store with that file in its tool resources.
print(thread.tool_resources.file_search)
```
Step 5: Create a Run and Check the Output
Now, create a Run and observe that the model uses the File Search tool to provide a response to the user’s question.
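A minimal sketch, assuming an existing assistant configured with the file_search tool, the thread created above, and a recent openai Python client that provides create_and_poll:

```python
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id,
    assistant_id=assistant.id,  # assumes an assistant with the file_search tool enabled
)

if run.status == "completed":
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    print(messages.data[0].content[0].text.value)  # most recent message first
```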
Pricing
Input: $5.00 / 1M tokens
Output: $15.00 / 1M tokens
gpt-4o-2024-05-13
Input: $5.00 / 1M tokens
Output: $15.00 / 1M tokens

Vision Pricing Calculator
Resolution: 150px x 150px
Price: $0.001275
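That figure can be reproduced with the published image-token formula (85 base tokens plus 170 tokens per 512x512 tile in high-detail mode) at GPT-4o's $5.00 / 1M input rate. The helper below is a simplified sketch that ignores the resizing rules applied to larger images:

```python
import math

def vision_input_cost(width_px, height_px, price_per_million=5.00):
    """Estimate image input cost: 85 base tokens + 170 tokens per 512x512 tile."""
    tiles = math.ceil(width_px / 512) * math.ceil(height_px / 512)
    tokens = 85 + 170 * tiles
    return tokens * price_per_million / 1_000_000

print(vision_input_cost(150, 150))  # 255 tokens -> 0.001275
```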
GPT-3.5 Turbo
GPT-3.5 Turbo is optimized for dialogue and is fast and inexpensive for simple tasks.
Input: $0.50 / 1M tokens
Output: $1.50 / 1M tokens
gpt-3.5-turbo-instruct
Input: $1.50 / 1M tokens
Output: $2.00 / 1M tokens

Embedding Models
Build advanced search, clustering, topic modeling, and classification functionality.
Training: $8.00 / 1M tokens
Input Usage: $3.00 / 1M tokens
Output Usage: $6.00 / 1M tokens
davinci-002
Training: $6.00 / 1M tokens
Input Usage: $12.00 / 1M tokens
Output Usage: $12.00 / 1M tokens
babbage-002
Training: $0.40 / 1M tokens
Input Usage: $1.60 / 1M tokens
Output Usage: $1.60 / 1M tokens

Assistants API
The Assistants API and its tools are billed at the chosen language model's per-token input/output rates. Additional fees apply for tool usage.
Input: $10.00 / 1M tokens
Output: $30.00 / 1M tokens
gpt-4-turbo-2024-04-09
Input: $10.00 / 1M tokens
Output: $30.00 / 1M tokens
gpt-4
Input: $30.00 / 1M tokens
Output: $60.00 / 1M tokens
gpt-4-32k
Input: $60.00 / 1M tokens
Output: $120.00 / 1M tokens
gpt-4-0125-preview
Input: $10.00 / 1M tokens
Output: $30.00 / 1M tokens
gpt-4-1106-preview
Input: $10.00 / 1M tokens
Output: $30.00 / 1M tokens
gpt-4-vision-preview
Input: $10.00 / 1M tokens
Output: $30.00 / 1M tokens
gpt-3.5-turbo-1106
Input: $1.00 / 1M tokens
Output: $2.00 / 1M tokens
gpt-3.5-turbo-0613
Input: $1.50 / 1M tokens
Output: $2.00 / 1M tokens
gpt-3.5-turbo-16k-0613
Input: $3.00 / 1M tokens
Output: $4.00 / 1M tokens
gpt-3.5-turbo-0301
Input: $1.50 / 1M tokens
Output: $2.00 / 1M tokens
davinci-002
Input: $2.00 / 1M tokens
Output: $2.00 / 1M tokens
babbage-002
Input: $0.40 / 1M tokens
Output: $0.40 / 1M tokens
FAQ
What’s a token?
Tokens are pieces of words used for natural language processing. For English text, 1 token is approximately 4 characters or 0.75 words. As a reference, the collected works of Shakespeare are about 900,000 words or 1.2M tokens.
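To count tokens programmatically, a small sketch using the tiktoken library (assuming it is installed and the model name is one it recognizes):

```python
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")
text = "Tokens are pieces of words used for natural language processing."
tokens = encoding.encode(text)
print(len(tokens), "tokens")
```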