# GAIA dataset
GAIA is a benchmark that aims to evaluate next-generation LLMs (LLMs with augmented capabilities thanks to added tooling, efficient prompting, access to search, etc.).
## Data
GAIA is made of more than 450 non-trivial questions with unambiguous answers, requiring different levels of tooling and autonomy to solve. It is therefore divided into 3 levels, where level 1 should be breakable by very good LLMs and level 3 indicates a strong jump in model capabilities. Each level is divided into a fully public dev set for validation, and a test set with private answers and metadata.
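
As an illustration, here is one way to load and inspect a level's public dev set with the `datasets` library. The repository name `gaia-benchmark/GAIA`, the `2023_level1` configuration, and the `validation` split used below are assumptions to adapt to the hosted dataset, and access may require accepting the dataset's terms on the Hub:

```python
from datasets import load_dataset

# Load the fully public dev (validation) set for level 1.
# Repository, configuration, and split names are assumptions; adjust as needed.
level1_dev = load_dataset("gaia-benchmark/GAIA", "2023_level1", split="validation")

print(f"{len(level1_dev)} level-1 dev questions")
print(level1_dev[0])  # one record: question text, level, final answer, file_name, ...
```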
The GAIA leaderboard can be found in [this space](https://huggingface.co/spaces/gaia-benchmark/leaderboard).
Questions are contained in `metadata.jsonl`. Some questions come with an additional file, which can be found in the same folder and whose name is given in the `file_name` field.
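
For instance, a minimal sketch of reading `metadata.jsonl` directly and resolving the optional attachment referenced by `file_name` (the local folder path below is a placeholder for illustration):

```python
import json
from pathlib import Path

data_dir = Path("2023/validation")  # placeholder: folder containing metadata.jsonl

questions = []
with open(data_dir / "metadata.jsonl") as f:
    for line in f:
        record = json.loads(line)
        # When a question needs an extra file, file_name points to it in the same folder.
        if record.get("file_name"):
            record["file_path"] = str(data_dir / record["file_name"])
        questions.append(record)

print(f"Loaded {len(questions)} questions")
```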
More details can be found in [the paper](https://arxiv.org/abs/2311.12983) for now, and will soon be available here as well.