---
license: cc-by-nc-4.0
task_categories:
- text-to-video
- text-to-image
language:
- en
pretty_name: VidProM
size_categories:
- 1M<n<10M
source_datasets:
- original
tags:
- prompts
- text-to-video
- text-to-image
- Pika
- VideoCraft2
- Text2Video-Zero
- ModelScope
- Video Generative Model Evaluation
- Text-to-Video Diffusion Model Development
- Text-to-Video Prompt Engineering
- Efficient Video Generation
- Fake Video Detection
- Video Copy Detection for Diffusion Models
configs:
- config_name: VidProM_unique
data_files: VidProM_unique.csv
---
<p align="center">
<img src="https://huggingface.co/datasets/WenhaoWang/VidProM/resolve/main/teasor.png" width="800">
</p>
# Summary
This is the dataset proposed in our paper "[**VidProM: A Million-scale Real Prompt-Gallery Dataset for Text-to-Video Diffusion Models**](https://arxiv.org/abs/2403.06098)".
VidProM is the first dataset featuring 1.67 million unique text-to-video prompts and 6.69 million videos generated by 4 different state-of-the-art diffusion models.
It inspires many exciting new research areas, such as Text-to-Video Prompt Engineering, Efficient Video Generation, Fake Video Detection, and Video Copy Detection for Diffusion Models.
# Directory
```
*DATA_PATH
*VidProM_unique.csv
*VidProM_semantic_unique.csv
*VidProM_embed.hdf5
*original_files
*generate_1_ori.html
*generate_2_ori.html
...
*pika_videos
*pika_videos_1.tar
*pika_videos_2.tar
...
*vc2_videos
*vc2_videos_1.tar
*vc2_videos_2.tar
...
*t2vz_videos
*t2vz_videos_1.tar
*t2vz_videos_2.tar
...
*ms_videos
*ms_videos_1.tar
*ms_videos_2.tar
...
*example
```
# Download
### Automatic
First, install the [datasets](https://huggingface.co/docs/datasets/v1.15.1/installation.html) library:
```
pip install datasets
```
Then the dataset can be downloaded automatically with
```
from datasets import load_dataset
dataset = load_dataset('WenhaoWang/VidProM')
```
### Manual
You can also download each file with ```wget```, for instance:
```
wget https://huggingface.co/datasets/WenhaoWang/VidProM/resolve/main/VidProM_unique.csv
```
### Users from China
For users in China, we cooperate with [Wisemodel](https://wisemodel.cn/home), and you can download the dataset faster from [here](https://wisemodel.cn/datasets/WenhaoWang/VidProM).
# Explanation
``VidProM_unique.csv`` contains the UUID, prompt, time, and 6 NSFW probabilities of each prompt.
It can easily be read with
```
import pandas as pd
df = pd.read_csv("VidProM_unique.csv")
```
Below are three rows from ``VidProM_unique.csv``:
| uuid | prompt | time | toxicity | obscene | identity_attack | insult | threat | sexual_explicit |
|--------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------|----------|---------|-----------------|---------|---------|-----------------|
| 6a83eb92-faa0-572b-9e1f-67dec99b711d | Flying among clouds and stars, kitten Max discovered a world full of winged friends. Returning home, he shared his stories and everyone smiled as they imagined flying together in their dreams. | Sun Sep 3 12:27:44 2023 | 0.00129 | 0.00016 | 7e-05 | 0.00064 | 2e-05 | 2e-05 |
| 3ba1adf3-5254-59fb-a13e-57e6aa161626 | Use a clean and modern font for the text "Relate Reality 101." Add a small, stylized heart icon or a thought bubble above or beside the text to represent emotions and thoughts. Consider using a color scheme that includes warm, inviting colors like deep reds, soft blues, or soothing purples to evoke feelings of connection and intrigue. | Wed Sep 13 18:15:30 2023 | 0.00038 | 0.00013 | 8e-05 | 0.00018 | 3e-05 | 3e-05 |
| 62e5a2a0-4994-5c75-9976-2416420526f7 | zoomed out, sideview of an Grey Alien sitting at a computer desk | Tue Oct 24 20:24:21 2023 | 0.01777 | 0.00029 | 0.00336 | 0.00256 | 0.00017 | 5e-05 |
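As a usage sketch, the 6 NSFW columns can be used to filter the prompts; the 0.01 threshold below is an arbitrary assumption, not a value recommended in our paper:
```
import pandas as pd

# NSFW probability columns shipped with VidProM_unique.csv
nsfw_cols = ["toxicity", "obscene", "identity_attack", "insult", "threat", "sexual_explicit"]

df = pd.read_csv("VidProM_unique.csv")

# Keep only prompts whose probabilities all fall below the chosen threshold
threshold = 0.01  # assumption: adjust to your application
clean = df[(df[nsfw_cols] < threshold).all(axis=1)]
print(f"{len(clean)} of {len(df)} prompts pass the filter")
```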
``VidProM_semantic_unique.csv`` is a semantically unique version of ``VidProM_unique.csv``.
``VidProM_embed.hdf5`` contains the 3072-dimensional embeddings of our prompts. They are embedded with text-embedding-3-large, the latest text embedding model from OpenAI.
It can easily be read by
```
import numpy as np
import h5py
def read_descriptors(filename):
    # Open the HDF5 file and read the embedding matrix together with the matching UUIDs
    with h5py.File(filename, "r") as hh:
        descs = np.array(hh["embeddings"])
        names = np.array(hh["uuid"][:], dtype=object).astype(str).tolist()
    return names, descs

uuid, features = read_descriptors('VidProM_embed.hdf5')
```
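As a usage sketch (reusing the `read_descriptors` helper above, and assuming the UUIDs in ``VidProM_embed.hdf5`` and ``VidProM_unique.csv`` match), the embeddings support a simple cosine-similarity nearest-neighbor search over prompts. Note that loading all 1.67 million embeddings at once requires a large amount of memory:
```
import numpy as np
import pandas as pd

uuids, features = read_descriptors('VidProM_embed.hdf5')
prompts = pd.read_csv("VidProM_unique.csv").set_index("uuid")["prompt"]

# Normalize the embeddings so that dot products equal cosine similarities
feats = features / np.linalg.norm(features, axis=1, keepdims=True)

# Use the first prompt as the query and print its 5 nearest neighbors
query = feats[0]
scores = feats @ query
top5 = np.argsort(-scores)[1:6]  # index 0 is the query itself, so skip it
for i in top5:
    print(f"{scores[i]:.3f}  {prompts[uuids[i]][:80]}")
```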
``original_files`` are the HTML files from the [official Pika Discord](https://discord.com/invite/pika) collected by [DiscordChatExporter](https://github.com/Tyrrrz/DiscordChatExporter). You can do whatever you want with them under the [CC BY-NC 4.0 license](https://creativecommons.org/licenses/by-nc/4.0/deed.en).
``pika_videos``, ``vc2_videos``, ``t2vz_videos``, and ``ms_videos`` contain the videos generated by 4 state-of-the-art text-to-video diffusion models. Each folder contains 30 tar files; one way to unpack them is shown in the sketch below.
``example`` is a subfolder that contains 10,000 datapoints.
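A minimal sketch for unpacking one archive with Python's standard `tarfile` module (the archive name is taken from the directory listing above; the output directory is an assumption):
```
import tarfile
from pathlib import Path

archive = "pika_videos_1.tar"            # one of the 30 Pika archives listed above
out_dir = Path("pika_videos_extracted")  # assumption: any output directory works
out_dir.mkdir(exist_ok=True)

# Extract every video in the archive into the output directory
with tarfile.open(archive) as tar:
    tar.extractall(out_dir)

print(f"Extracted {sum(1 for _ in out_dir.rglob('*'))} files to {out_dir}")
```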
# Datapoint
<p align="center">
<img src="https://huggingface.co/datasets/WenhaoWang/VidProM/resolve/main/datapoint.png" width="800">
</p>
# Comparison with DiffusionDB
<p align="center">
<img src="https://huggingface.co/datasets/WenhaoWang/VidProM/resolve/main/compare_table.jpg" width="800">
</p>
<p align="center">
<img src="https://huggingface.co/datasets/WenhaoWang/VidProM/resolve/main/compare_visual.png" width="800">
</p>
<p align="center">
<img src="https://huggingface.co/datasets/WenhaoWang/VidProM/resolve/main/WizMap_V_D.jpg" width="800">
</p>
Click [here](https://poloclub.github.io/wizmap/?dataURL=https://huggingface.co/datasets/WenhaoWang/VidProM/resolve/main/data_vidprom_diffusiondb.ndjson&gridURL=https://huggingface.co/datasets/WenhaoWang/VidProM/resolve/main/grid_vidprom_diffusiondb.json%20) for an interactive WizMap.
Please check our paper for a detailed comparison.
# Curators
VidProM is created by [Wenhao Wang](https://wangwenhao0716.github.io/) and Professor [Yi Yang](https://scholar.google.com/citations?user=RMSuNFwAAAAJ&hl=zh-CN) from [the ReLER Lab](https://reler.net/).
# License
The prompts and videos generated by [Pika](https://discord.com/invite/pika) in our VidProM are licensed under the [CC BY-NC 4.0 license](https://creativecommons.org/licenses/by-nc/4.0/deed.en). Additionally, similar to their original repositories, the videos from [VideoCraft2](https://github.com/AILab-CVC/VideoCrafter), [Text2Video-Zero](https://github.com/Picsart-AI-Research/Text2Video-Zero), and [ModelScope](https://huggingface.co/ali-vilab/modelscope-damo-text-to-video-synthesis) are released under the [Apache license](https://www.apache.org/licenses/LICENSE-2.0), the [CreativeML Open RAIL-M license](https://github.com/Picsart-AI-Research/Text2Video-Zero/blob/main/LICENSE), and the [CC BY-NC 4.0 license](https://creativecommons.org/licenses/by-nc/4.0/deed.en), respectively. Our code is released under the [CC BY-NC 4.0 license](https://creativecommons.org/licenses/by-nc/4.0/deed.en).
# Citation
```
@article{wang2024vidprom,
title={VidProM: A Million-scale Real Prompt-Gallery Dataset for Text-to-Video Diffusion Models},
author={Wang, Wenhao and Sun, Yifan and Yang, Yi},
journal={arXiv preprint arXiv:2403.06098},
year={2024}
}
```
# Contact
If you have any questions, feel free to contact Wenhao Wang (wangwenhao0716@gmail.com).