StableV2V: Stablizing Shape Consistency in Video-to-Video Editing

Chang Liu, Rui Li, Kaidong Zhang, Yunwei Lan, Dong Liu

[Paper] / [Project] / [GitHub] / [Models]

This is the HuggingFace repo of DAVIS-Edit, the testing benchmark proposed in the paper "StableV2V: Stablizing Shape Consistency in Video-to-Video Editing".

Data Structure

We follow the same data structure as DAVIS, as shown below:

DAVIS-Edit
├── Annotations                                 <----- Official annotated masks of DAVIS
│   ├── bear
│   ├── blackswan
│   ├── ...
│   └── train
├── JPEGImages                                  <----- Official video frames of DAVIS
│   ├── bear
│   ├── blackswan
│   ├── ...
│   └── train
├── ReferenceImages                             <----- Annotated reference images for image-based editing on DAVIS-Edit
│   ├── similar                                 <----- Reference images for DAVIS-Edit-S
│   │   ├── bear.png
│   │   ├── blackswan.png
│   │   ├── ...
│   │   └── train.png
│   └── changing                                <----- Reference images for DAVIS-Edit-C
│       ├── bear.png
│       ├── ...
│       └── train.png
├── .gitattributes
├── README.md
├── edited_video_caption_dict_image.json        <----- Annotated text descriptions for image-based editing on DAVIS-Edit
└── edited_video_caption_dict_text.json         <----- Annotated text descriptions for text-based editing on DAVIS-Edit
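To verify that a local copy of DAVIS-Edit matches this layout, you can run a minimal sanity check along the lines below. Note that the root path ./DAVIS-Edit is an assumption; adjust it to wherever you downloaded the dataset.

import os

# NOTE: assumed local path; change it to your download location
root = './DAVIS-Edit'

# Top-level entries expected from the layout above
for entry in ['Annotations', 'JPEGImages', 'ReferenceImages',
              'edited_video_caption_dict_image.json',
              'edited_video_caption_dict_text.json']:
  assert os.path.exists(os.path.join(root, entry)), f'Missing: {entry}'

# Every video folder in JPEGImages should have matching masks in Annotations
for video_name in sorted(os.listdir(os.path.join(root, 'JPEGImages'))):
  assert os.path.isdir(os.path.join(root, 'Annotations', video_name)), video_name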

Specifically, edited_video_caption_dict_image.json and edited_video_caption_dict_text.json are constructed as Python dictionaries, with their keys being the names of the video folders in JPEGImages. For example, in edited_video_caption_dict_text.json:

{
  "bear": {
    "original": "a bear walking on rocks in a zoo",
    "similar": "A panda walking on rocks in a zoo",
    "changing": "A rabbit walking on rocks in a zoo"
  },
  ...
}

The reference images are organized into two sub-folders, i.e., similar and changing, corresponding to the annotations for DAVIS-Edit-S and DAVIS-Edit-C, respectively, where each image is named after the corresponding video folder in JPEGImages (see the sketch below).
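For example, switching between the two settings only changes the sub-folder used when building the reference image path; a small sketch assuming the layout above:

import os

video_name = 'bear'
setting = 'similar'    # 'similar' for DAVIS-Edit-S, 'changing' for DAVIS-Edit-C
reference_image_path = os.path.join('ReferenceImages', setting, video_name + '.png')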

How to use DAVIS-Edit?

We highly recommend indexing the different elements of DAVIS-Edit through the annotation files. In particular, you may refer to the script below:

import os
import json
from tqdm import tqdm
from PIL import Image

# TODO: Modify the configurations here to your local paths
frame_root = 'JPEGImages'
mask_root = 'Annotations'
reference_image_root = 'ReferenceImages/similar'            # Or 'ReferenceImages/changing'
annotation_file_path = 'edited_video_caption_dict_text.json'

# Load the annotation file
with open(annotation_file_path, 'r') as f:
  annotations = json.load(f)

# Iterate all data samples in DAVIS-Edit
for video_name in tqdm(annotations.keys()):

  # Load text prompts
  original_prompt = annotations[video_name]['original']
  similar_prompt = annotations[video_name]['similar']
  changing_prompt = annotations[video_name]['changing']

  # Load reference images
  reference_image = Image.open(os.path.join(reference_image_root, video_name + '.png'))

  # Load video frames
  video_frames = []
  for path in sorted(os.listdir(os.path.join(frame_root, video_name))):
    if path not in ('Thumbs.db', '.DS_Store'):
      video_frames.append(Image.open(os.path.join(frame_root, video_name, path)))

  # Load masks
  masks = []
  for path in sorted(os.listdir(os.path.join(mask_root, video_name))):
    if path not in ('Thumbs.db', '.DS_Store'):
      masks.append(Image.open(os.path.join(mask_root, video_name, path)))

  # (Add any further operations you need here.)
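As one example of a further operation, the lines below would go inside the per-video loop: they check that frames and masks are aligned and convert each mask to a binary NumPy array. This is only an illustration (NumPy is an extra dependency not used by the script above), not part of the official benchmark code.

import numpy as np

# Frames and masks in DAVIS are annotated per frame, so the counts should match
assert len(video_frames) == len(masks), video_name

# Convert each annotation to a binary foreground mask
binary_masks = [np.array(mask.convert('L')) > 0 for mask in masks]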