tttoaster committed on
Commit 8eef872 • 1 Parent(s): 8a1832a

Update constants.py

Files changed (1)
  1. constants.py +8 -8
constants.py CHANGED
@@ -31,22 +31,22 @@ UNTUNED_MODEL_RESULTS = '''LLM & Flan-T5 & Flan-T5-XL &23.0 &29.0
  LEADERBORAD_INTRODUCTION = """# SEED-Bench Leaderboard
 
  Welcome to the leaderboard of the SEED-Bench! 🏆
- This is a community where participants create multimodal language models and action generation algorithms to generate API function calls based goals described in natural lanugage!
+ SEED-Bench consists of 19K multiple-choice questions with accurate human annotations for evaluating Multimodal LLMs, covering 12 evaluation dimensions including both spatial and temporal understanding.
  Please refer to [our paper](https://arxiv.org/abs/2307.16125) for more details.
  """
 
- SUBMIT_INTRODUCTION = """# Submit Precautions
- 1. Attain JSON file from our [github repository](https://github.com/AILab-CVC/SEED-Bench#leaderboard-submit) after evaluation. For example, you can obtain InstructBLIP's JSON file as results/results.json after running
+ SUBMIT_INTRODUCTION = """# Submit Introduction
+ 1. Obtain the JSON file from our [GitHub repository](https://github.com/AILab-CVC/SEED-Bench#leaderboard-submit) after evaluation. For example, you can obtain InstructBLIP's JSON file as results/results.json after running
  ```shell
  python eval.py --model instruct_blip --anno_path SEED-Bench.json --output-dir results
  ```
- 2. If you want to revise a model, please ensure 'Model Name Revision' align with what's in the leaderboard. For example, if you want to modify InstructBLIP's evaluation result, you need to fill in 'InstructBLIP' in 'Revision Model Name'.
+ 2. If you want to update a model's performance by uploading new results, please ensure 'Revision Model Name' matches the name shown in the leaderboard. For example, if you want to modify InstructBLIP's performance, fill in 'InstructBLIP' in 'Revision Model Name'.
- 3. Please ensure the right link for each submission. Everyone could go to the model's repository through the model name in the leaderboard.
+ 3. Please provide the correct link to your model's repository for each submission.
- 4. If you don't want to evaluate all dimensions, not evaluated dimension performance, and its corresponding average performance will be set to 0.
+ 4. For the evaluation dimension, you can choose "All/Image/Video"; the results of dimensions that are not evaluated will be set to zero.
- 5. After clicking 'Submit Eval', you can click 'Refresh' to obtain the latest leaderboard.
+ 5. After clicking 'Submit Eval', you can click 'Refresh' to obtain the latest result in the leaderboard.
 
  ## Submit Example
- For example, if you want to revise InstructBLIP's performance in the leaderboard, you need to:
+ For example, if you want to update InstructBLIP's result in the leaderboard, you need to:
  1. Fill in 'InstructBLIP' in 'Revision Model Name'.
  2. Select 'ImageLLM' in 'Model Type'.
  3. Fill in 'https://github.com/salesforce/LAVIS' in 'Model Link'.
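
The results/results.json file produced by the eval.py command above is what gets uploaded with a submission, so it can be worth sanity-checking locally first. Below is a minimal sketch; the record keys ("question_id", "prediction") are hypothetical placeholders, since the actual output schema lives in the SEED-Bench repository and is not shown in this diff.

```python
# Minimal sanity-check sketch for the file produced by:
#   python eval.py --model instruct_blip --anno_path SEED-Bench.json --output-dir results
# ASSUMPTION: the file holds a list of per-question records with
# "question_id" and "prediction" keys -- hypothetical names; verify
# against the real schema in the SEED-Bench repository.
import json
from pathlib import Path

results_path = Path("results/results.json")
with results_path.open() as f:
    results = json.load(f)

print(f"Loaded {len(results)} records from {results_path}")
for record in results[:5]:  # peek at the first few entries
    print(record.get("question_id"), "->", record.get("prediction"))
```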
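
For context on how edits to constants.py surface in the Space: constants like LEADERBORAD_INTRODUCTION and SUBMIT_INTRODUCTION are markdown strings, and a Gradio leaderboard app would typically render them with gr.Markdown. A hedged sketch follows, assuming a tabbed Gradio layout; the actual app code is not part of this diff and may differ.

```python
# Sketch of how the markdown constants might be rendered in a Gradio app.
# ASSUMPTION: the Space is a Gradio app with tabs; the real layout may differ.
import gradio as gr

from constants import LEADERBORAD_INTRODUCTION, SUBMIT_INTRODUCTION

with gr.Blocks() as demo:
    gr.Markdown(LEADERBORAD_INTRODUCTION)  # leaderboard header text
    with gr.Tab("Submit"):
        gr.Markdown(SUBMIT_INTRODUCTION)   # submission instructions

demo.launch()
```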