---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: id
    dtype: string
  - name: reddit
    dtype: string
  - name: glitch-type
    dtype: string
  - name: game
    dtype: string
  - name: source
    dtype: string
  - name: description
    dtype: string
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: validation
    num_bytes: 686309290
    num_examples: 607
  download_size: 686303027
  dataset_size: 686309290
license: mit
task_categories:
- image-to-text
language:
- en
tags:
- Video Game
- Glitch
pretty_name: GlitchBench
size_categories:
- n<1K
---

# GlitchBench

This repository contains the dataset for the paper [`GlitchBench: Can large multimodal models detect video game glitches?`](https://arxiv.org/abs/2312.05291)

<div align="center">
    <p> by
        <a href="https://taesiri.ai">Mohammad Reza Taesiri</a>,
        Tianjun Feng,
        <a href="https://anhnguyen.me/research/">Anh Nguyen</a>, and
        <a href="https://asgaard.ece.ualberta.ca/">Cor-Paul Bezemer</a>
    </p>
    <p>
    (CVPR 2024)
    </p>
</div>



## Abstract

Large multimodal models (LMMs) have evolved from large language models (LLMs) to integrate multiple input modalities, such as visual inputs. This integration augments the capacity of LLMs in tasks requiring visual comprehension and reasoning. However, the extent and limitations of their enhanced abilities are not fully understood. To address this gap, we introduce GlitchBench, a novel benchmark designed to test and evaluate the common-sense reasoning and visual recognition capabilities of large multimodal models. Our dataset is curated from a variety of unusual, infrequent, and glitched scenarios from video game content and aims to challenge both the visual and linguistic reasoning powers of LMMs in detecting and interpreting out-of-the-ordinary events and scene composition.
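The card header above declares the per-example schema (an `image` plus string metadata fields and an integer index). As a minimal, stdlib-only sketch, the following validates a record against that declared schema; the sample record is hypothetical and not drawn from the dataset, and in practice you would obtain real examples by loading the repository with the Hugging Face `datasets` library.

```python
# Field names and types as declared in the dataset card's YAML header.
# `object` marks the image field, whose decoded type depends on the loader
# (e.g. a PIL.Image when loaded via the `datasets` library).
EXPECTED_FIELDS = {
    "image": object,
    "id": str,
    "reddit": str,
    "glitch-type": str,
    "game": str,
    "source": str,
    "description": str,
    "__index_level_0__": int,
}

def validate_example(example: dict) -> bool:
    """Check that an example carries every declared field with a plausible type."""
    for name, expected in EXPECTED_FIELDS.items():
        if name not in example:
            return False
        if expected is not object and not isinstance(example[name], expected):
            return False
    return True

# Hypothetical record for illustration only -- not an actual dataset entry.
sample = {
    "image": b"\x89PNG...",          # placeholder for image data
    "id": "example-0",
    "reddit": "https://reddit.com/...",
    "glitch-type": "clipping",
    "game": "SomeGame",
    "source": "reddit",
    "description": "Character clips through a wall.",
    "__index_level_0__": 0,
}
print(validate_example(sample))  # → True
```

A record missing any declared field, or carrying a wrong type (e.g. a non-string `description`), fails the check.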