songdj committed
Commit
e17e64b
1 Parent(s): 1a81a0d

Update README.md

Files changed (1)
  1. README.md +26 -0
README.md CHANGED
@@ -299,4 +299,30 @@ configs:
    path: preview/WikiVQA_test-*
  - split: WikiVQA_adv
    path: preview/WikiVQA_adv-*
+ task_categories:
+ - visual-question-answering
+ - question-answering
+ - text-generation
+ - image-to-text
+ - video-classification
+ language:
+ - en
+ tags:
+ - Long-context
+ - MLLM
+ - VLM
+ - LLM
+ pretty_name: MileBench
+ size_categories:
+ - 1K<n<10K
  ---
+
+
+ # MileBench
+
+ We introduce MileBench, a pioneering benchmark designed to test the **M**ult**I**modal **L**ong-cont**E**xt capabilities of MLLMs.
+ This benchmark comprises not only multimodal long contexts, but also multiple tasks requiring both comprehension and generation.
+ We establish two distinct evaluation sets, diagnostic and realistic, to systematically assess MLLMs' long-context adaptation capacity and their ability to complete tasks in long-context scenarios.
+
+ To construct our evaluation sets, we gather 6,440 multimodal long-context samples from 21 pre-existing or self-constructed datasets,
+ with an average of 15.2 images and 422.3 words each, as depicted in the figure, and we categorize them into their respective subsets.
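
As a quick sanity check of the `preview` config declared in the YAML above, here is a minimal loading sketch using the Hugging Face `datasets` library. The Hub repo id `songdj/MileBench` is an assumption for illustration only; the config name `preview` and the split names `WikiVQA_test` / `WikiVQA_adv` come from the paths in the diff.

```python
# Minimal sketch: load one split of the "preview" config added in this commit.
# NOTE: the repo id "songdj/MileBench" is a placeholder assumption, not confirmed
# by this diff -- substitute the dataset's actual Hub id.
from datasets import load_dataset

# Splits listed in the config include WikiVQA_test and WikiVQA_adv.
ds = load_dataset("songdj/MileBench", name="preview", split="WikiVQA_adv")

print(ds)      # features and row count
print(ds[0])   # one multimodal long-context sample
```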