---
configs:
- config_name: default
  data_files:
  - split: train
    path: 
        - data/train-*
        - data/val-*
  - split: validation
    path: data/validation-*
dataset_info:
  features:
  - name: id
    dtype: string
  - name: original_prompt
    dtype: string
  - name: positive_prompt
    dtype: string
  - name: negative_prompt
    dtype: string
  - name: url
    dtype: string
  - name: model_gen0
    dtype: string
  - name: model_gen1
    dtype: string
  - name: model_gen2
    dtype: string
  - name: model_gen3
    dtype: string
  - name: width_gen0
    dtype: int64
  - name: width_gen1
    dtype: int64
  - name: width_gen2
    dtype: int64
  - name: width_gen3
    dtype: int64
  - name: height_gen0
    dtype: int64
  - name: height_gen1
    dtype: int64
  - name: height_gen2
    dtype: int64
  - name: height_gen3
    dtype: int64
  - name: num_inference_steps_gen0
    dtype: int64
  - name: num_inference_steps_gen1
    dtype: int64
  - name: num_inference_steps_gen2
    dtype: int64
  - name: num_inference_steps_gen3
    dtype: int64
  - name: filepath_gen0
    dtype: string
  - name: filepath_gen1
    dtype: string
  - name: filepath_gen2
    dtype: string
  - name: filepath_gen3
    dtype: string
  - name: image_gen0
    dtype: image
  - name: image_gen1
    dtype: image
  - name: image_gen2
    dtype: image
  - name: image_gen3
    dtype: image
  splits:
  - name: train
    num_bytes: 2626848010531.5
    num_examples: 2306629
  - name: validation
    num_bytes: 5318900038.0
    num_examples: 4800
  download_size: 2568003790242
  dataset_size: 2632166910569.5
---

# ELSA - Multimedia use case

![image/gif](https://cdn-uploads.huggingface.co/production/uploads/6380ccd084022715e0d49d4e/6eRNxY1AFfaksVu8oTk8v.gif)

**ELSA Multimedia is a large collection of deep fake images generated using diffusion models.**

### Dataset Summary

This dataset was developed as part of the EU project ELSA, specifically for the Multimedia use case.
Official webpage: https://benchmarks.elsa-ai.eu/

The dataset aims to support the development of effective solutions for detecting and mitigating the spread of deep fake images in multimedia content. Deep fake images are highly realistic and deceptive manipulations that pose significant risks to privacy, security, and trust in digital media. This dataset can be used to train robust and accurate models that identify and flag instances of deep fake images.

### ELSA versions

| Name  | Description | Link |
| ------------- | ------------- | ---------------------| 
| ELSA1M_track1  | Dataset of 1M images generated using a diffusion model  | https://huggingface.co/datasets/elsaEU/ELSA1M_track1 |
| ELSA10M_track1  | Dataset of 10M images generated using four different diffusion models per caption, with multiple image compression formats and multiple aspect ratios | https://huggingface.co/datasets/elsaEU/ELSA_D3 |
| ELSA500k_track2  | Dataset of 500k images generated using a diffusion model with diffusion attentive attribution maps [1]  | https://huggingface.co/datasets/elsaEU/ELSA500k_track2 |


```python
from datasets import load_dataset
elsa_data = load_dataset("elsaEU/ELSA_D3", split="train", streaming=True)
```

Using <a href="https://huggingface.co/docs/datasets/stream">streaming=True</a> lets you work with the dataset without downloading it (the full download is roughly 2.6 TB).
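
As a minimal sketch of how to consume the stream (field names follow the schema in the YAML header above; the output filenames are illustrative):

```python
from datasets import load_dataset

# Stream the training split; no local download of the ~2.6 TB dataset is needed.
elsa_data = load_dataset("elsaEU/ELSA_D3", split="train", streaming=True)

# Each record carries the prompts, per-generator metadata, and the four
# generated images, which `datasets` decodes to PIL images on iteration.
for i, sample in enumerate(elsa_data):
    print(sample["id"], sample["model_gen0"], sample["width_gen0"], sample["height_gen0"])
    sample["image_gen0"].save(f"sample{i}_gen0.png")  # illustrative output path
    if i == 2:  # stop after three examples
        break
```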

## Dataset Structure

Each parquet file contains nearly 1k images and a JSON file with metadata.
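
To check a shard locally without loading the images, you can read just the parquet footer. A minimal sketch with pyarrow (the shard path is hypothetical; actual shards follow the `data/train-*` pattern from the config above):

```python
import pyarrow.parquet as pq

# Hypothetical local shard; real shards follow the data/train-* naming pattern.
pf = pq.ParquetFile("train-shard.parquet")
print(pf.metadata.num_rows)   # rows in this shard (nearly 1k images per file)
print(pf.schema_arrow)        # column names match the metadata fields below
```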

The metadata fields for the generated images are (see the regrouping sketch after this list):

- id: LAION image ID
- original_prompt: LAION prompt
- positive_prompt: positive prompt used for image generation
- negative_prompt: negative prompt used for image generation
- url: URL of the real image associated with the same prompt
- model_gen0-model_gen3: name of each of the four generators
- width_gen0-width_gen3: width of each generated image
- height_gen0-height_gen3: height of each generated image
- num_inference_steps_gen0-num_inference_steps_gen3: diffusion steps used by each generator
- filepath_gen0-filepath_gen3: path of each generated image
- image_gen0-image_gen3: image generated by each generator
- aspect_ratio: aspect ratio of the generated images
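
Since each example flattens four generations into `*_gen0` through `*_gen3` columns, it can be convenient to regroup them into one record per generated image. A minimal sketch (the helper `regroup_generations` is ours, not part of the dataset):

```python
def regroup_generations(sample, n_generators=4):
    """Turn the flat *_gen{i} columns of one sample into one dict per generator."""
    shared = {k: sample[k] for k in ("id", "original_prompt", "positive_prompt",
                                     "negative_prompt", "url")}
    records = []
    for i in range(n_generators):
        rec = dict(shared)
        for field in ("model", "width", "height", "num_inference_steps",
                      "filepath", "image"):
            rec[field] = sample[f"{field}_gen{i}"]
        records.append(rec)
    return records

# Usage with the streaming iterator from above:
# records = regroup_generations(next(iter(elsa_data)))
# records[0]["image"] is the PIL image produced by records[0]["model"]
```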


### Dataset Curators

- Leonardo Labs (rosario.dicarlo.ext@leonardo.com)
- UNIMORE (https://aimagelab.ing.unimore.it/imagelab/)