---
license: apache-2.0
language:
- en
size_categories:
- n<1K
---

# Dataset Card for BalanceCC

This is the BalanceCC benchmark published in [CCEdit](https://arxiv.org/pdf/2309.16496.pdf). It contains 100 videos with varied attributes and is designed to provide a comprehensive platform
for evaluating **generative video editing**, with a focus on both controllability and creativity.

[Paper Link](https://arxiv.org/pdf/2309.16496.pdf)

[Project Page](https://ruoyufeng.github.io/CCEdit.github.io/)

## Dataset Details

### Dataset Description

Our objective is to develop a benchmark dataset specifically designed for controllable and creative video editing.
To this end, we collected 100 videos spanning four categories: Animal, Human, Object, and Landscape.
For each source video, we provide a text description and grade its Camera Motion, Object Motion, and Scene Complexity on a scale from 1 to 3.
Each video also comes with four types of edits, each with a corresponding target prompt and Fantasy Level (also graded from 1 to 3): Style Change, Object Change, Background Change, and Compound Change.
This design makes it easier to compare the strengths, weaknesses, and areas of expertise of different methods, and to help researchers advance their techniques.

## Dataset Structure

**BalanceCC**
- BalanceCC.json
- miniBalanceCC.json
- StatisticalResults.png
- Result
  - Animal
  - Human
  - Landscape
  - Object

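For a quick sanity check after downloading, a minimal sketch like the one below can enumerate the per-category folders under `Result`. The local path `BalanceCC/` is illustrative and assumes the repository has been downloaded to that directory:

```python
from pathlib import Path

# Minimal sketch: list the per-category entries under Result/.
# Assumes the dataset sits in a local "BalanceCC" directory (illustrative path).
root = Path("BalanceCC") / "Result"

for category in ["Animal", "Human", "Landscape", "Object"]:
    entries = sorted(p.name for p in (root / category).iterdir())
    print(f"{category}: {len(entries)} entries")
```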


### Annotations

BalanceCC.json and miniBalanceCC.json are lists of dictionaries. Each entry includes "Video Name", "Video Type", "Original Prompt", "Editing", "Camera Motion", "Object Motion", and "Scene Complexity".
"Editing" is a list of dictionaries describing the different editing targets, each with "Editing Type", "Target Prompt", and "Fantasy Level".
The difference between the two files is that each sample in BalanceCC.json has four editing targets (Style Change, Object Change, Background Change, and Compound Change), while each sample in miniBalanceCC.json contains only one of these editing targets.

Here is an example in BalanceCC.json:
```json
[
    {
        "Video Name": "blackswan",
        "Video Type": "Animal",
        "Original Prompt": "A black swan swimming in a pond with lush greenery in the background.",
        "Editing": [
            { 
                "Editing Type": "Style Change",
                "Target Prompt": "A black swan swimming in a pond with lush greenery in the background, oil painting style.",
                "Fantasy Level": 1
            },
            { 
                "Editing Type": "Object Change",
                "Target Prompt": "A majestic flamingo swimming in a pond with lush greenery in the background.",
                "Fantasy Level": 1
            },
            { 
                "Editing Type": "Background Change",
                "Target Prompt": "A black swan swimming in a crystal clear lake surrounded by snow-capped mountains.",
                "Fantasy Level": 2
            },
            { 
                "Editing Type": "Multiple Change",
                "Target Prompt": "A duck made of origami floating on a pond under a cherry blossom tree in full bloom.",
                "Fantasy Level": 3
            }
        ],
        "Camera Motion": 2,
        "Object Motion": 2,
        "Scene Complexity": 2
    },
    ...
]
```
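A minimal loading sketch using only the Python standard library; the field names follow the example above, and the file path assumes a local copy of BalanceCC.json next to the script (illustrative):

```python
import json
from pathlib import Path

# Minimal sketch: load the annotations and print every editing target.
# Assumes BalanceCC.json has been downloaded locally (illustrative path).
with Path("BalanceCC.json").open(encoding="utf-8") as f:
    samples = json.load(f)

for sample in samples:
    for edit in sample["Editing"]:
        print(
            sample["Video Name"],
            edit["Editing Type"],
            f"fantasy={edit['Fantasy Level']}",
            edit["Target Prompt"],
            sep=" | ",
        )
```

The same loop works for miniBalanceCC.json, where each sample carries a single editing target.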


#### Annotation process

Annotations were produced with GPT-4V and then revised by human annotators. Please refer to our [paper](https://arxiv.org/pdf/2309.16496.pdf) for detailed information.



## Citation
```bibtex
@article{feng2023ccedit,
  title={Ccedit: Creative and controllable video editing via diffusion models},
  author={Feng, Ruoyu and Weng, Wenming and Wang, Yanhui and Yuan, Yuhui and Bao, Jianmin and Luo, Chong and Chen, Zhibo and Guo, Baining},
  journal={arXiv preprint arXiv:2309.16496},
  year={2023}
}
```

## Dataset Card Contact

Ruoyu Feng's email: [ustcfry@mail.ustc.edu.cn](mailto:ustcfry@mail.ustc.edu.cn)