---
license: apache-2.0
language:
- en
tags:
- vlm
- reasoning
- multimodal
- nli
size_categories:
- n<1K
task_categories:
- visual-question-answering
---


# **NL-Eye Benchmark**

Will a Visual Language Model (VLM)-based bot warn us about slipping if it detects a wet floor?
Recent VLMs have demonstrated impressive capabilities, yet their ability to infer outcomes and causes remains underexplored. To address this, we introduce **NL-Eye**, a benchmark designed to assess VLMs' **visual abductive reasoning skills**.
NL-Eye adapts the **abductive Natural Language Inference (NLI)** task to the visual domain, requiring models to evaluate the **plausibility of hypothesis images** given a premise image and to explain their decisions. The dataset contains **350 carefully curated triplet examples** (1,050 images) spanning diverse reasoning categories, temporal categories, and domains.
NL-Eye represents a crucial step toward developing **VLMs capable of robust multimodal reasoning** for real-world applications, such as accident-prevention bots and generated-video verification.

Project page: [NL-Eye project page](https://venturamor.github.io/NLEye/)

Preprint: [NL-Eye on arXiv](https://arxiv.org/abs/2410.02613)

---

## **Dataset Structure**
The dataset contains:
- A **CSV file** with annotations (`test_set.csv`).
- An **images directory** with subdirectories for each sample (`images/`).

### **CSV Fields:**
| Field                          | Type     | Description                                                    |
|--------------------------------|----------|----------------------------------------------------------------|
| `sample_id`                    | `int`    | Unique identifier for each sample.                             |
| `reasoning_category`           | `string` | One of the six reasoning categories (physical, functional, logical, emotional, cultural, or social). |
| `domain`                       | `string` | One of the ten domain categories (e.g., education, technology).     |
| `time_direction`               | `string` | One of three temporal directions: forward, backward, or parallel.                 |
| `time_duration`                | `string` | One of three durations: short, long, or parallel.                  |
| `premise_description`          | `string` | Description of the premise.                               |
| `plausible_hypothesis_description` | `string` | Description of the plausible hypothesis.                        |
| `implausible_hypothesis_description` | `string` | Description of the implausible hypothesis.                      |
| `gold_explanation`             | `string` | The gold explanation for the sample's plausibility.             |
| `additional_valid_human_explanations` | `string` (optional) | Extra human-generated (crowd-workers) explanations for explanation diversity. |

> **Note**: Not all samples contain `additional_valid_human_explanations`.
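For quick inspection, the annotations can be loaded with pandas. A minimal sketch, assuming the repository files have been downloaded to the working directory (the column names follow the table above):

```python
import pandas as pd

# Load the annotation CSV shipped with the dataset.
df = pd.read_csv("test_set.csv")

# Distribution over the six reasoning categories.
print(df["reasoning_category"].value_counts())

# Inspect the first sample and its gold explanation.
row = df.iloc[0]
print(row["premise_description"])
print(row["gold_explanation"])
```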



---



### **Images Directory Structure:**

The `images/` directory contains **subdirectories named after each `sample_id`**. Each subdirectory includes:

- **`premise.png`**: the premise image.
- **`hypothesis1.png`**: the plausible hypothesis image.
- **`hypothesis2.png`**: the implausible hypothesis image.
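
Putting the CSV and the image directory together, a full triplet can be assembled from one annotation row. A minimal sketch using Pillow; the root path `"."` is an assumption about where the repository was downloaded:

```python
from pathlib import Path

import pandas as pd
from PIL import Image

def load_triplet(root: str, sample_id: int):
    """Return the (premise, plausible, implausible) images for one sample."""
    sample_dir = Path(root) / "images" / str(sample_id)
    premise = Image.open(sample_dir / "premise.png")
    plausible = Image.open(sample_dir / "hypothesis1.png")
    implausible = Image.open(sample_dir / "hypothesis2.png")
    return premise, plausible, implausible

df = pd.read_csv("test_set.csv")
premise, plausible, implausible = load_triplet(".", int(df.iloc[0]["sample_id"]))
print(premise.size, plausible.size, implausible.size)
```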



## **Usage**

This dataset is intended **for evaluation (test) purposes only** and should not be used for training.
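
An evaluation loop only needs to ask a model which hypothesis is more plausible given the premise, then score against the known layout (`hypothesis1.png` is the plausible image by construction). A hedged sketch; `predict_plausible` is a hypothetical stand-in for whatever VLM call you use:

```python
from pathlib import Path

import pandas as pd

def predict_plausible(premise, hyp1, hyp2) -> int:
    """Hypothetical stand-in: return 1 or 2 for the hypothesis your VLM judges more plausible."""
    raise NotImplementedError  # plug in your model call here

df = pd.read_csv("test_set.csv")
correct = 0
for _, row in df.iterrows():
    d = Path("images") / str(row["sample_id"])
    pred = predict_plausible(d / "premise.png", d / "hypothesis1.png", d / "hypothesis2.png")
    correct += int(pred == 1)  # hypothesis1.png is the plausible image by construction

print(f"Plausibility accuracy: {correct / len(df):.3f}")
```

In practice you would randomize the presentation order of the two hypotheses before querying the model, to avoid rewarding a position bias.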



### **Citation**

```bibtex
@misc{ventura2024nleye,
  title={NL-Eye: Abductive NLI for Images},
  author={Mor Ventura and Michael Toker and Nitay Calderon and Zorik Gekhman and Yonatan Bitton and Roi Reichart},
  year={2024},
  eprint={2410.02613},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```