cff-version: 1.2.0
title: "ALL Bench Leaderboard 2026: Unified Multi-Modal AI Evaluation"
message: "If you use this dataset, please cite it as below."
type: dataset
authors:
  - name: "ALL Bench Team"
url: "https://huggingface.co/spaces/FINAL-Bench/all-bench-leaderboard"
repository-code: "https://github.com/final-bench/ALL-Bench-Leaderboard"
license: MIT
version: "2.1"
date-released: "2026-03-08"
keywords:
  - ai-benchmark
  - llm-leaderboard
  - vlm
  - multimodal-ai
  - metacognition
  - final-bench
  - gpt-5
  - claude
  - gemini
abstract: >-
  ALL Bench Leaderboard is the only AI benchmark covering LLM, VLM, Agent,
  Image, Video, and Music generation in a single unified view. It
  cross-verifies 91 AI models across 6 modalities with a 3-tier confidence
  system, and it features composite 5-axis scoring (Knowledge, Expert
  Reasoning, Abstract Reasoning, Metacognition, Execution), interactive
  comparison tools, and downloadable intelligence reports. It also includes
  the FINAL Bench metacognitive evaluation, in which Error Recovery explains
  94.8% of the variance in self-correction performance.