---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: AnnotatorId
    dtype: string
  - name: ImgId
    dtype: string
  - name: caption
    dtype: string
  - name: Impact
    dtype: float64
  - name: image_description
    dtype: string
  - name: image_impression
    dtype: string
  - name: image_aesthetic_eval
    dtype: string
  - name: image_url
    dtype: string
  - name: source
    dtype: string
  splits:
  - name: train
    num_bytes: 2366953929.024
    num_examples: 1352
  download_size: 2214475090
  dataset_size: 2366953929.024
license: cc-by-sa-4.0
task_categories:
- image-to-text
- visual-question-answering
language:
- en
tags:
- art
pretty_name: Impressions
size_categories:
- 1K<n<10K
---
# Dataset Card for "Impressions"

## Overview

The Impressions dataset is a multimodal benchmark that consists of 4,100 unique annotations and over 1,375 image-caption pairs from the photography domain. Each annotation explores (1) the aesthetic impactfulness of a photograph, (2) image descriptions in which pragmatic inferences are welcome, (3) emotions/thoughts/beliefs that the photograph may inspire, and (4) the aesthetic elements that elicited the expressed impression.

EMNLP 2023 | [Paper](https://arxiv.org/abs/2310.17887)
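
For quick inspection, the dataset can be loaded with the Hugging Face `datasets` library. The sketch below is a minimal example; the Hub repository ID `SALT-NLP/Impressions` is an assumption based on the linked GitHub organization and may differ from the actual ID.

```python
# Minimal sketch of loading and inspecting the dataset with the `datasets` library.
# The repository ID "SALT-NLP/Impressions" is assumed; substitute the actual Hub ID if it differs.
from datasets import load_dataset

ds = load_dataset("SALT-NLP/Impressions", split="train")
print(ds)  # features and number of examples in the train split

example = ds[0]
print(example["caption"])               # caption paired with the image
print(example["Impact"])                # aesthetic impact score (float64)
print(example["image_description"])     # pragmatic description annotation
print(example["image_impression"])      # emotions/thoughts/beliefs the image inspires
print(example["image_aesthetic_eval"])  # aesthetic elements behind the impression
example["image"].show()                 # PIL image; opens in an external viewer
```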

## Additional Data

The Impressions dataset comes with more information than just the image annotations for *Pragmatic Description*, *Perception*, and *Aesthetic Evaluation*. For annotator personality and demographic metadata, as well as all *Aesthetic Impact* annotations, please see our [git repository](https://github.com/SALT-NLP/Impressions)!
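
To connect the Hub data with the annotator metadata in the git repository, annotations can be grouped by `AnnotatorId`. The snippet below is only a sketch (the Hub ID is assumed as above, and the chosen annotator ID is illustrative); the resulting IDs can then be matched against the metadata files in the git repository.

```python
# Sketch: collect all annotations from one annotator so they can be matched
# against the personality/demographic metadata in the git repository.
from datasets import load_dataset

ds = load_dataset("SALT-NLP/Impressions", split="train")  # Hub ID assumed

annotator_id = ds[0]["AnnotatorId"]  # pick an ID that is present in the data
theirs = ds.filter(lambda ex: ex["AnnotatorId"] == annotator_id)
print(len(theirs), "annotations by annotator", annotator_id)
```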

