---
language:
  - en
license: cc-by-nc-4.0
size_categories:
  - 1M<n<10M
task_categories:
  - image-to-video
  - text-to-video
dataset_info:
  features:
    - name: UUID
      dtype: string
    - name: Text_Prompt
      dtype: string
    - name: Image_Prompt
      dtype: image
    - name: Subject
      dtype: string
    - name: Timestamp
      dtype: string
    - name: Text_NSFW
      dtype: float32
    - name: Image_NSFW
      dtype: string
  splits:
    - name: Full
      num_bytes: 13440652664.125
      num_examples: 1701935
    - name: Subset
      num_bytes: 790710630
      num_examples: 100000
    - name: Eval
      num_bytes: 78258893
      num_examples: 10000
  download_size: 27500759907
  dataset_size: 27750274851.25
configs:
  - config_name: default
    data_files:
      - split: Full
        path: data/Full-*
      - split: Subset
        path: data/Subset-*
      - split: Eval
        path: data/Eval-*
tags:
  - prompt
  - image-to-video
  - text-to-video
---

# Summary

This is the dataset proposed in our paper *TIP-I2V: A Million-Scale Real Text and Image Prompt Dataset for Image-to-Video Generation*.

TIP-I2V is the first dataset comprising over 1.70 million unique, user-provided text and image prompts. Alongside the prompts, TIP-I2V also includes the corresponding videos generated by five state-of-the-art image-to-video models (Pika, Stable Video Diffusion, Open-Sora, I2VGen-XL, and CogVideoX-5B). TIP-I2V contributes to the development of better and safer image-to-video models.

# Download

## Download the text and (compressed) image prompts with related information

```python
# Full (text and compressed image) prompts: ~13.4G
from datasets import load_dataset
ds = load_dataset("WenhaoWang/TIP-I2V", split='Full', streaming=True)

# Convert to Pandas format (it may be slow)
import pandas as pd
df = pd.DataFrame(ds)
```

```python
# 100k subset (text and compressed image) prompts: ~0.8G
from datasets import load_dataset
ds = load_dataset("WenhaoWang/TIP-I2V", split='Subset', streaming=True)

# Convert to Pandas format (it may be slow)
import pandas as pd
df = pd.DataFrame(ds)
```

```python
# 10k TIP-Eval (text and compressed image) prompts: ~0.08G
from datasets import load_dataset
ds = load_dataset("WenhaoWang/TIP-I2V", split='Eval', streaming=True)

# Convert to Pandas format (it may be slow)
import pandas as pd
df = pd.DataFrame(ds)
```
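
If you only need to inspect a few records, you can iterate the streamed split directly instead of converting everything to a DataFrame. Below is a minimal sketch: the field names follow the dataset features listed in the metadata above, and `Image_Prompt` should be decoded to a PIL image by the `datasets` library.

```python
from datasets import load_dataset

# Stream the Full split and look at the first few records without downloading everything.
ds = load_dataset("WenhaoWang/TIP-I2V", split='Full', streaming=True)

for i, record in enumerate(ds):
    print(record["UUID"], record["Subject"], record["Text_Prompt"][:80])
    # Image_Prompt is decoded to a PIL image; save it next to the script.
    record["Image_Prompt"].save(f"{record['UUID']}.png")
    if i == 4:
        break
```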

## Download the embeddings for text and image prompts

```python
# Embeddings for full text prompts (~21G) and image prompts (~3.5G)
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="Embedding/Full_Text_Embedding.parquet", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="Embedding/Full_Image_Embedding.parquet", repo_type="dataset")
```

```python
# Embeddings for 100k subset text prompts (~1.2G) and image prompts (~0.2G)
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="Embedding/Subset_Text_Embedding.parquet", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="Embedding/Subset_Image_Embedding.parquet", repo_type="dataset")
```

```python
# Embeddings for 10k TIP-Eval text prompts (~0.1G) and image prompts (~0.02G)
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="Embedding/Eval_Text_Embedding.parquet", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="Embedding/Eval_Image_Embedding.parquet", repo_type="dataset")
```
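
As a rough sketch of how the embedding files might be used, you can load a parquet with pandas and run a cosine-similarity search. The column names used below (`UUID` and `Embedding`) are assumptions for illustration only; inspect the actual schema (e.g. `df.columns`) before relying on them.

```python
import numpy as np
import pandas as pd
from huggingface_hub import hf_hub_download

# Download and load the 10k TIP-Eval text-prompt embeddings.
path = hf_hub_download(
    repo_id="WenhaoWang/TIP-I2V",
    filename="Embedding/Eval_Text_Embedding.parquet",
    repo_type="dataset",
)
df = pd.read_parquet(path)
print(df.columns)  # check the real column names first

# Assumed column names below ("UUID", "Embedding") are illustrative.
emb = np.vstack(df["Embedding"].to_numpy())                # (N, D) embedding matrix
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)     # L2-normalize rows

# Cosine similarity of the first prompt against all others.
scores = emb @ emb[0]
top = np.argsort(-scores)[:5]
print(df["UUID"].iloc[top].tolist())
```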

## Download uncompressed image prompts

```python
# Full uncompressed image prompts: ~1T
from huggingface_hub import hf_hub_download
for i in range(1, 52):
    hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="image_prompt_tar/image_prompt_%d.tar" % i, repo_type="dataset")
```

```python
# 100k subset uncompressed image prompts: ~69.6G
from huggingface_hub import hf_hub_download
for i in range(1, 3):
    hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="sub_image_prompt_tar/sub_image_prompt_%d.tar" % i, repo_type="dataset")
```

```python
# 10k TIP-Eval uncompressed image prompts: ~6.5G
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="eval_image_prompt_tar/eval_image_prompt.tar", repo_type="dataset")
```
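
`hf_hub_download` returns the local path of the cached archive, which you can then unpack with the standard library. A minimal sketch for the TIP-Eval image prompts (the output directory name is just an example):

```python
import tarfile
from huggingface_hub import hf_hub_download

# Download the 10k TIP-Eval image-prompt archive and unpack it locally.
tar_path = hf_hub_download(
    repo_id="WenhaoWang/TIP-I2V",
    filename="eval_image_prompt_tar/eval_image_prompt.tar",
    repo_type="dataset",
)
with tarfile.open(tar_path) as tar:
    tar.extractall("eval_image_prompts")  # example output directory
```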

## Download generated videos

```python
# Full videos generated by Pika: ~1T
from huggingface_hub import hf_hub_download
for i in range(1, 52):
    hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="pika_videos_tar/pika_videos_%d.tar" % i, repo_type="dataset")
```

```python
# 100k subset videos generated by Pika (~57.6G), Stable Video Diffusion (~38.9G), Open-Sora (~xxG), I2VGen-XL (~54.4G), and CogVideoX-5B (~xxG)
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="subset_videos_tar/pika_videos_subset_1.tar", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="subset_videos_tar/pika_videos_subset_2.tar", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="subset_videos_tar/svd_videos_subset.tar", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="subset_videos_tar/opensora_videos_subset.tar", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="subset_videos_tar/i2vgenxl_videos_subset.tar", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="subset_videos_tar/cog_videos_subset.tar", repo_type="dataset")
```

```python
# 10k TIP-Eval videos generated by Pika (~5.6G), Stable Video Diffusion (~xG), Open-Sora (~xxG), I2VGen-XL (~xxG), and CogVideoX-5B (~xxG)
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="eval_videos_tar/pika_videos_eval.tar", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="eval_videos_tar/svd_videos_eval.tar", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="eval_videos_tar/opensora_videos_eval.tar", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="eval_videos_tar/i2vgenxl_videos_eval.tar", repo_type="dataset")
hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename="eval_videos_tar/cog_videos_eval.tar", repo_type="dataset")
```
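
For convenience, the five TIP-Eval video archives listed above can be fetched and unpacked in a single loop. This is a small sketch; the extraction directories are just named after the tar files.

```python
import tarfile
from huggingface_hub import hf_hub_download

eval_video_tars = [
    "eval_videos_tar/pika_videos_eval.tar",
    "eval_videos_tar/svd_videos_eval.tar",
    "eval_videos_tar/opensora_videos_eval.tar",
    "eval_videos_tar/i2vgenxl_videos_eval.tar",
    "eval_videos_tar/cog_videos_eval.tar",
]

for filename in eval_video_tars:
    tar_path = hf_hub_download(repo_id="WenhaoWang/TIP-I2V", filename=filename, repo_type="dataset")
    # Unpack each archive into a directory named after the tar file.
    out_dir = filename.split("/")[-1].replace(".tar", "")
    with tarfile.open(tar_path) as tar:
        tar.extractall(out_dir)
```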