---
license: apache-2.0
language:
  - en
size_categories:
  - n<1K
---

Dataset Card for BalanceCC

This is the BalanceCC benchmark introduced in CCEdit. It contains 100 videos with varied attributes and is designed to offer a comprehensive platform for evaluating generative video editing, with a focus on both controllability and creativity.

[Paper Link](https://arxiv.org/abs/2309.16496)

Project Page

Dataset Details

Dataset Description

Our objective is to build a benchmark dataset specifically designed for controllable and creative video editing. To this end, we collected 100 videos from different categories: Animal, Human, Object, and Landscape. For each source video, we provide a text description and grade its Camera Motion, Object Motion, and Scene Complexity on a scale from 1 to 3. Each video also comes with four types of edits, namely Style Change, Object Change, Background Change, and Compound Change, each accompanied by a target prompt and a Fantasy Level (also ranging from 1 to 3). Our aim is to make it easier to compare the strengths, weaknesses, and areas of expertise of different methods, and to assist researchers in advancing their techniques.
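
The sketch below summarizes this vocabulary as plain Python constants (illustrative only, not part of the dataset files; the strings mirror the fields of the JSON files described under Annotations):

```python
# Illustrative summary of the BalanceCC annotation vocabulary (not part of the dataset files).
VIDEO_TYPES = ["Animal", "Human", "Object", "Landscape"]
VIDEO_ATTRIBUTES = ["Camera Motion", "Object Motion", "Scene Complexity"]  # each graded 1-3 per video
EDITING_TYPES = ["Style Change", "Object Change", "Background Change", "Compound Change"]
FANTASY_LEVEL_RANGE = (1, 3)  # each editing target carries a Fantasy Level in this range
```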

Dataset Structure

BalanceCC

  • BalanceCC.json
  • miniBalanceCC.json
  • StatisticalResults.png
  • Result
    • Animal
    • Human
    • Landscape
    • Object
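
A minimal loading sketch follows (assuming a local copy of this repository with the layout above; the folder names under Result are taken from the listing, everything else is illustrative):

```python
import json
from pathlib import Path

# Assumption: the repository has been downloaded to ./BalanceCC with the layout shown above.
root = Path("BalanceCC")

with open(root / "BalanceCC.json", encoding="utf-8") as f:
    full_split = json.load(f)          # 4 editing targets per video
with open(root / "miniBalanceCC.json", encoding="utf-8") as f:
    mini_split = json.load(f)          # 1 editing target per video

print(f"{len(full_split)} videos in BalanceCC, {len(mini_split)} in miniBalanceCC")

# Count the items stored under Result/<Category>/ (illustrative; adjust to the actual file layout).
for category in ["Animal", "Human", "Landscape", "Object"]:
    clips = sorted((root / "Result" / category).iterdir())
    print(f"{category}: {len(clips)} items")
```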

Annotations

BalanceCC.json and miniBalanceCC.json are lists of dictionaries. Each entry includes "Video Name", "Video Type", "Original Prompt", "Editing", "Camera Motion", "Object Motion", and "Scene Complexity". "Editing" is a list of dictionaries describing the editing targets, each with an "Editing Type", a "Target Prompt", and a "Fantasy Level". The difference between the two files is that each sample in BalanceCC.json has four editing targets (Style Change, Object Change, Background Change, and Compound Change), whereas each sample in miniBalanceCC.json contains only one of them.

Here is an example in BalanceCC.json:

```json
[
    {
        "Video Name": "blackswan",
        "Video Type": "Animal",
        "Original Prompt": "A black swan swimming in a pond with lush greenery in the background.",
        "Editing": [
            { 
                "Editing Type": "Style Change",
                "Target Prompt": "A black swan swimming in a pond with lush greenery in the background, oil painting style.",
                "Fantasy Level": 1
            },
            { 
                "Editing Type": "Object Change",
                "Target Prompt": "A majestic flamingo swimming in a pond with lush greenery in the background.",
                "Fantasy Level": 1
            },
            { 
                "Editing Type": "Background Change",
                "Target Prompt": "A black swan swimming in a crystal clear lake surrounded by snow-capped mountains.",
                "Fantasy Level": 2
            },
            { 
                "Editing Type": "Multiple Change",
                "Target Prompt": "A duck made of origami floating on a pond under a cherry blossom tree in full bloom.",
                "Fantasy Level": 3
            }
        ],
        "Camera Motion": 2,
        "Object Motion": 2,
        "Scene Complexity": 2
    },
    ...
]
```
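
As a usage sketch, the snippet below pairs each source video with the target prompts of one editing type and tallies the attribute grades (hypothetical helper code, not part of the benchmark; field names are taken from the example above):

```python
import json
from collections import Counter

with open("BalanceCC/BalanceCC.json", encoding="utf-8") as f:
    entries = json.load(f)

def target_prompts(entry, editing_type):
    """Return the target prompts of the requested editing type for one video entry."""
    return [e["Target Prompt"] for e in entry["Editing"] if e["Editing Type"] == editing_type]

# Example: (video name, prompt) pairs for every Style Change edit.
style_edits = [(v["Video Name"], p) for v in entries for p in target_prompts(v, "Style Change")]
print(f"{len(style_edits)} style-change targets")

# Example: distribution of the 1-3 attribute grades across the benchmark.
for attribute in ("Camera Motion", "Object Motion", "Scene Complexity"):
    print(attribute, Counter(v[attribute] for v in entries))
```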

Annotation process

Annotations were produced with GPT-4V and then revised by human annotators. Please refer to our paper for details.

Citation

```bibtex
@article{feng2023ccedit,
  title={CCEdit: Creative and Controllable Video Editing via Diffusion Models},
  author={Feng, Ruoyu and Weng, Wenming and Wang, Yanhui and Yuan, Yuhui and Bao, Jianmin and Luo, Chong and Chen, Zhibo and Guo, Baining},
  journal={arXiv preprint arXiv:2309.16496},
  year={2023}
}
```

Dataset Card Contact

Ruoyu Feng's email: ustcfry@mail.ustc.edu.cn