gregmialz committed
Commit bb6c22e
Parent: ac816a0

Update content.py

Files changed (1)
  1. content.py +9 -3
content.py CHANGED
@@ -3,6 +3,9 @@ TITLE = """<h1 align="center" id="space-title">GAIA Leaderboard</h1>"""
 CANARY_STRING = "" # TODO
 
 INTRODUCTION_TEXT = """
+
+# Summary
+
 Large language models have seen their potential capabilities increased by several orders of magnitude with the introduction of augmentations, from simple prompting adjustments to actual external tooling (calculators, vision models, ...) or online web retrieval.
 To evaluate the next generation of LLMs, we argue for a new kind of benchmark that is simple yet effective at measuring actual progress on augmented capabilities, and therefore present GAIA. Details in the paper.
 
@@ -10,7 +13,11 @@ GAIA is made of 3 evaluation levels, depending on the added level of tooling and
 We expect Level 1 to be breakable by very good LLMs, and Level 3 to indicate a strong jump in model capabilities.
 Each of these levels is divided into two sets: a fully public dev set, on which people can test their models, and a test set with private answers and metadata. Results can be submitted for both validation and test.
 
-The data can be found in this space (https://huggingface.co/datasets/gaia-benchmark/GAIA). Questions are contained in `metadata.jsonl`. Some questions come with an additional file, which can be found in the same folder and whose id is given in the field `file_name`.
+# Data
+
+GAIA data can be found in this space (https://huggingface.co/datasets/gaia-benchmark/GAIA). It consists of ~466 questions distributed across two splits, with a similar distribution of Levels. Questions are contained in `metadata.jsonl`. Some questions come with an additional file, which can be found in the same folder and whose id is given in the field `file_name`.
+
+# Submissions
 
 We expect submissions to be json-line files with the following format. The first two fields are mandatory, `reasoning_trace` is optional:
 ```
@@ -19,8 +26,7 @@ We expect submissions to be json-line files with the following format. The first
 ...
 ```
 
-Scores are expressed as the percentage of correct answers for a given split.
-
+Scores are expressed as the percentage of correct answers for a given split.
 Submissions made by our team are labelled "GAIA authors". While we report average scores over different runs when possible in our paper, we only report the best run in the leaderboard.
 
 Please do not repost the public dev set, nor use it in training data for your models.
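
For readers of the new `# Data` wording: below is a minimal sketch of reading the layout described above (`metadata.jsonl` plus optional attachments named by `file_name` in the same folder). It assumes a local copy of the gaia-benchmark/GAIA repository; the `GAIA/2023/validation` path is a placeholder, not something stated in this commit.

```python
import json
from pathlib import Path

# Placeholder path to a local clone of https://huggingface.co/datasets/gaia-benchmark/GAIA;
# the "2023/validation" folder name is an assumption, adjust to the actual repo layout.
DATA_DIR = Path("GAIA/2023/validation")

questions = []
with open(DATA_DIR / "metadata.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # Some questions reference an extra file stored in the same folder,
        # identified by the `file_name` field (empty when there is none).
        attachment = record.get("file_name") or None
        if attachment:
            record["attachment_path"] = str(DATA_DIR / attachment)
        questions.append(record)

print(f"Loaded {len(questions)} questions")
```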
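And a matching sketch of the `# Submissions` format: a json-line file with one object per line, where `reasoning_trace` is optional. The names of the two mandatory fields used here (`task_id`, `model_answer`) are assumptions, since the example block itself is outside this hunk; check the dataset card for the exact keys.

```python
import json

# Hypothetical field names for the two mandatory entries; only `reasoning_trace`
# is named in content.py, so verify the mandatory keys against the dataset card.
predictions = [
    {"task_id": "example-task-1", "model_answer": "42", "reasoning_trace": "step-by-step notes"},
    {"task_id": "example-task-2", "model_answer": "Paris"},  # reasoning_trace is optional
]

# JSON Lines: one JSON object per line, no enclosing array and no trailing commas.
with open("submission.jsonl", "w", encoding="utf-8") as f:
    for row in predictions:
        f.write(json.dumps(row) + "\n")
```

The leaderboard score for a split is then simply the fraction of these answers that match the private ground truth, reported as a percentage.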