Update README.md
README.md CHANGED
@@ -41,6 +41,70 @@ dataset_info:
  download_size: 54902331553
  dataset_size: 127788930013.0
---
-# Dataset Card for "ELSA500k_track2"

# ELSA - Multimedia use case

![daam.gif](https://cdn-uploads.huggingface.co/production/uploads/6380ccd084022715e0d49d4e/a4Sxbr5E69lox_Z9T3gHI.gif)

**ELSA Multimedia is a large collection of deep fake images generated using diffusion models.**

### Dataset Summary

This dataset was developed as part of the EU project ELSA, specifically for the multimedia use case.
Official webpage: https://benchmarks.elsa-ai.eu/

This dataset supports the development of effective solutions for detecting and mitigating the spread of deep fake images in multimedia content. Deep fake images, which are highly realistic and deceptive manipulations, pose significant risks to privacy, security, and trust in digital media. The dataset can be used to train robust and accurate models that identify and flag deep fake images.

### ELSA versions

| Name | Description | Link |
| --------------- | ------------- | --------------------- |
| ELSA1M_track1 | Dataset of 1M images generated using diffusion models | https://huggingface.co/datasets/rs9000/ELSA1M_track1 |
| ELSA500k_track2 | Dataset of 500k images generated using diffusion models, with diffusion attentive attribution maps [1] | https://huggingface.co/datasets/rs9000/ELSA500k_track2 |

```python
from daam import WordHeatMap
from datasets import load_dataset
import matplotlib.pyplot as plt
import torch

# Stream the dataset instead of downloading all shards
elsa_data = load_dataset("rs9000/ELSA500k_track2", split="train", streaming=True)

for sample in elsa_data:
    image = sample.pop("image")
    heatmaps = sample.pop("heatmaps")
    heatmap_labels = sample.pop("heatmap_labels")
    metadata = sample  # the remaining fields are the image metadata

    # Overlay each diffusion attentive attribution map on the generated image
    for h, label in zip(heatmaps, heatmap_labels):
        heatmap = WordHeatMap(torch.tensor(h), word=label)
        heatmap.plot_overlay(image)
        plt.show()
```

Using <a href="https://huggingface.co/docs/datasets/stream">streaming=True</a> lets you work with the dataset without downloading it.
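
For quick experiments, the streamed split can also be shuffled and truncated before iterating, using the `shuffle` and `take` helpers that 🤗 Datasets provides for iterable datasets. A minimal sketch; the fields printed are the metadata fields listed under Dataset Structure below:

```python
from datasets import load_dataset

# Stream the split, shuffle it with a small buffer, and keep only five samples
elsa_data = load_dataset("rs9000/ELSA500k_track2", split="train", streaming=True)
preview = elsa_data.shuffle(seed=42, buffer_size=100).take(5)

for sample in preview:
    print(sample["ID"], sample["model"], sample["positive_prompt"][:60])
```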

## Dataset Structure

Each parquet file contains nearly 1k images, together with a JSON file holding their metadata.

The metadata for each generated image includes:

- ID: LAION image ID
- original_prompt: LAION prompt
- positive_prompt: positive prompt used for image generation
- negative_prompt: negative prompt used for image generation
- model: model used for image generation
- nsfw: NSFW tag from LAION
- url_real_image: URL of the real image associated with the same prompt (see the pairing sketch below)
- filepath: file path of the fake image
- aspect_ratio: aspect ratio of the generated image
- heatmaps: diffusion attentive attribution maps
- heatmap_labels: words related to the heatmaps
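
Because each record stores the URL of the real LAION image that shares its prompt, real/fake pairs for detector training can be assembled on the fly. The sketch below is illustrative rather than official tooling: it assumes `url_real_image` resolves to a downloadable image and uses `requests` and Pillow as ordinary helpers (LAION URLs can be dead, so failed fetches are skipped):

```python
from io import BytesIO

import requests
from PIL import Image
from datasets import load_dataset

elsa_data = load_dataset("rs9000/ELSA500k_track2", split="train", streaming=True)

pairs = []  # (PIL image, label) with fake = 1, real = 0
for sample in elsa_data.take(16):
    try:
        # Fetch the real image paired with the same prompt
        resp = requests.get(sample["url_real_image"], timeout=10)
        resp.raise_for_status()
        real_image = Image.open(BytesIO(resp.content)).convert("RGB")
    except Exception:
        continue  # dead or unreachable URL: skip this pair
    pairs.append((sample["image"], 1))  # generated (fake) image
    pairs.append((real_image, 0))       # real counterpart
```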

### Dataset Curators

- Leonardo Labs (rosario.dicarlo.ext@leonardo.com)
- UNIMORE (https://aimagelab.ing.unimore.it/imagelab/)

### References

[1] Raphael Tang et al. What the DAAM: Interpreting Stable Diffusion Using Cross Attention. 2023.