---
license: cc-by-4.0
task_categories:
  - video-classification
  - visual-question-answering
language:
  - en
pretty_name: 'ANAKIN: manipulated videos and mask annotations'
size_categories:
  - 1K<n<10K
---

# ANAKIN

ANAKIN is a dataset of mANipulated videos and mAsK annotatIoNs. To the best of our knowledge, ANAKIN is the first real-world dataset of professionally edited video clips, paired with source videos, edit descriptions, and binary mask annotations of the edited regions. ANAKIN consists of 1023 videos in total: 352 edited videos from the VideoSham dataset plus 671 new videos collected from the Vimeo platform.


## Data Format

| Label | Description |
| --- | --- |
| video-id | Video ID |
| full* | Full-length original video |
| trimmed | Short clip trimmed from full |
| edited | Manipulated version of trimmed |
| masks* | Per-frame binary masks annotating the manipulation |
| start-time* | Trim start time (in seconds) |
| end-time* | Trim end time (in seconds) |
| task | Task given to the video editor |
| manipulation-type | One of five manipulation types: splicing, inpainting, swap, audio, frame-level |
| editor-id | Editor ID |

*Several subset configurations are available. The choice depends on whether you need to download the full-length videos and/or only the videos that have masks. `start-time` and `end-time` are returned only for configs that include full videos.

| config | full | masks | train/val/test |
| --- | --- | --- | --- |
| all | yes | maybe | 681/98/195 |
| no-full | no | maybe | 716/102/205 |
| has-masks | no | yes | 297/43/85 |
| full-masks | yes | yes | 297/43/85 |

## Example

The data can either be downloaded or streamed.

### Downloaded

```python
from datasets import load_dataset
from torchvision.io import read_video

config = 'no-full'  # one of ['all', 'no-full', 'has-masks', 'full-masks']
dataset = load_dataset("AlexBlck/ANAKIN", config, num_proc=8)

for sample in dataset['train']:  # ['train', 'validation', 'test']
    # TCHW = (frames, channels, height, width)
    trimmed_video, trimmed_audio, _ = read_video(sample['trimmed'], output_format="TCHW")
    edited_video, edited_audio, _ = read_video(sample['edited'], output_format="TCHW")
    masks = sample['masks']
    print(sample.keys())
```

### Streamed

```python
from datasets import load_dataset
import cv2

dataset = load_dataset("AlexBlck/ANAKIN", streaming=True)

sample = next(iter(dataset['train']))  # ['train', 'validation', 'test']
cap = cv2.VideoCapture(sample['trimmed'])

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # ... process frame ...

cap.release()
```