SEED-Bench Card

Benchmark details

Benchmark type: SEED-Bench-2 is a comprehensive large-scale benchmark for evaluating Multimodal Large Language Models (MLLMs), featuring 24K multiple-choice questions with precise human annotations. It spans 27 evaluation dimensions, assessing both text and image generation.
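As a rough illustration of how multiple-choice results on such a benchmark are typically scored per evaluation dimension, the sketch below computes per-dimension accuracy over hypothetical question records. The field names `dimension`, `answer`, and `prediction` are assumptions for illustration only, not the dataset's actual schema.

```python
from collections import defaultdict

def score_by_dimension(records):
    """Compute accuracy per evaluation dimension.

    Each record is a dict with hypothetical fields:
      'dimension'  - name of the evaluation dimension
      'answer'     - ground-truth choice label, e.g. 'A'
      'prediction' - the model's chosen label
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["dimension"]] += 1
        if r["prediction"] == r["answer"]:
            correct[r["dimension"]] += 1
    return {dim: correct[dim] / total[dim] for dim in total}

# Toy records, made up for demonstration (not real SEED-Bench-2 data).
records = [
    {"dimension": "Scene Understanding", "answer": "A", "prediction": "A"},
    {"dimension": "Scene Understanding", "answer": "B", "prediction": "C"},
    {"dimension": "Instance Counting", "answer": "D", "prediction": "D"},
]
print(score_by_dimension(records))
```

Overall benchmark accuracy can then be reported as the mean over dimensions or over all questions, depending on the evaluation protocol.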

Benchmark date: SEED-Bench-2 was collected in November 2023.

Paper or resources for more information:

License: Attribution-NonCommercial 4.0 International (CC BY-NC 4.0). Use of the benchmark should also abide by the policy of OpenAI:

Data Sources:

Please contact us if you believe any data infringes upon your rights, and we will remove it.

Where to send questions or comments about the benchmark:

Intended use

Primary intended uses: SEED-Bench-2 is primarily designed to evaluate Multimodal Large Language Models in text and image generation tasks.

Primary intended users: Researchers and enthusiasts in computer vision, natural language processing, machine learning, and artificial intelligence are the main target users of the benchmark.
