---
license: apache-2.0
language:
  - en
  - zh
tags:
  - FAVD
  - FAVDBench
  - Video Description
  - Audio Description
  - Audible Video Description
  - Fine-grained Description
size_categories:
  - 10K<n<100K
---

FAVDBench: Fine-grained Audible Video Description

🤗 Hugging Face • 🏠 GitHub • 🤖 OpenDataLab • 💬 Apply Dataset

[CVPR2023] [Project Page] [arXiv] [Demo] [BibTeX] [中文简介]

Introduction 简介

在CVPR2023中,我们提出了精细化音视频描述任务(Fine-grained Audible Video Description, FAVD)。该任务旨在提供有关可听视频的详细文本描述,包括每个对象的外观和空间位置、移动对象的动作以及视频中的声音。我们同时也为社区贡献了第一个精细化音视频描述数据集FAVDBench。对于每个视频片段,我们不仅提供一句话的视频概要,还提供4-6句描述视频视觉细节的句子和1-2个音频相关描述,且所有标注均有中英文双语。

At CVPR2023, we introduced the task of Fine-grained Audible Video Description (FAVD). This task aims to provide detailed textual descriptions of audible videos, including the appearance and spatial positions of each object, the actions of moving objects, and the sounds within the video. Additionally, we contributed the first fine-grained audible video description dataset, FAVDBench, to the community. For each video segment, we offer not only a single-sentence video summary but also 4-6 sentences describing the visual details of the video and 1-2 audio-related descriptions, all annotated in both Chinese and English.
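To make the annotation scheme concrete, here is a sketch of what a single English annotation record might look like. Only the field names (id, description) are taken from the Files section below; the id, the sentences, and the list layout of description are invented assumptions for illustration.

```python
# Hypothetical FAVDBench annotation record (illustration only: the field
# names follow the Files section; everything else is invented, and whether
# "description" is a list or a single string is an assumption).
example_record = {
    "id": "segment_0001",  # unique video-segment id (invented)
    "description": [
        # one-sentence video summary
        "A man plays an acoustic guitar on a park bench.",
        # 4-6 sentences of visual detail (appearance, spatial position, motion)
        "He wears a blue jacket and sits on the left side of the frame.",
        "A small dog lies beside the bench, wagging its tail.",
        "Pedestrians pass behind the bench from right to left.",
        # 1-2 audio-related descriptions
        "Soft, steady guitar strumming is heard throughout the clip.",
    ],
}
```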

Files 文件

  • meta: metadata for raw videos

    • train, val, test: the train/validation/test splits
    • ytid: YouTube video id
    • start: segment start time in seconds
    • end: segment end time in seconds
  • videos, audios: raw video and audio segments

    • train : train split
    • val: validation split
    • test: test split
    • 📢📢📢 Please refer to Apply Dataset to obtain the raw video/audio data
  • annotations_en.json: annotated descriptions in English (see the loading sketch after this list)

    • id: unique data (video segment) id
    • description: audio-visual descriptions
  • annotations_zh.json: annotated descriptions in Chinese

    • id: unique data (video segment) id

    • cap, des: audio-visual descriptions

    • dcount: count of descriptions

  • experiments: experiment files needed to replicate the results reported in the paper.

    • 📢📢📢 Please refer to GitHub Repo to get related data
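A minimal loading sketch, assuming the annotation files sit next to the script and follow the field layout listed above; the ffmpeg segment cutter is likewise an illustrative assumption based on the meta fields (ytid, start, end), not the official preprocessing pipeline.

```python
import json
import subprocess

# Load the English annotations (fields per the list above: id, description).
with open("annotations_en.json", encoding="utf-8") as f:
    anns_en = json.load(f)

# Peek at one record; handles either a top-level list or an id-keyed dict,
# since the exact JSON layout is not specified here.
first = anns_en[0] if isinstance(anns_en, list) else next(iter(anns_en.values()))
print(first["id"], first["description"])

# Hypothetical helper: cut one segment out of a downloaded YouTube video
# using the meta fields. Placing -ss/-to after -i keeps the timestamps
# absolute, and "-c copy" avoids re-encoding.
def cut_segment(src: str, start: float, end: float, dst: str) -> None:
    subprocess.run(
        ["ffmpeg", "-i", src, "-ss", str(start), "-to", str(end),
         "-c", "copy", dst],
        check=True,
    )

# Usage (paths are placeholders): cut_segment("raw/<ytid>.mp4", 12.0, 22.0, "clip.mp4")
```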

MD5 checksum

| file | md5sum |
| --- | --- |
| videos/train.zip | 41ddad46ffac339cb0b65dffc02eda65 |
| videos/val.zip | 35291ad23944d67212c6e47b4cc6d619 |
| videos/test.zip | 07046d205837d2e3b1f65549fc1bc4d7 |
| audios/train.zip | 50cc83eebd84f85e9b86bbd2a7517f3f |
| audios/val.zip | 73995c5d1fcef269cc90be8a8ef6d917 |
| audios/test.zip | f72085feab6ca36060a0a073b31e8acc |
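A small verification sketch in Python, assuming the six zips have been downloaded into videos/ and audios/ next to the script; the expected digests are exactly the ones tabulated above, and hashlib is standard library.

```python
import hashlib

# Expected MD5 digests, copied from the table above.
EXPECTED = {
    "videos/train.zip": "41ddad46ffac339cb0b65dffc02eda65",
    "videos/val.zip": "35291ad23944d67212c6e47b4cc6d619",
    "videos/test.zip": "07046d205837d2e3b1f65549fc1bc4d7",
    "audios/train.zip": "50cc83eebd84f85e9b86bbd2a7517f3f",
    "audios/val.zip": "73995c5d1fcef269cc90be8a8ef6d917",
    "audios/test.zip": "f72085feab6ca36060a0a073b31e8acc",
}

def md5sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file through MD5 so large zips are not loaded into memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

for path, expected in EXPECTED.items():
    status = "OK" if md5sum(path) == expected else "MISMATCH"
    print(f"{path}: {status}")
```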

Updates

Latest version: Public v0.1 (Jan 9, 2023)

  1. v0.1 <Jan 9, 2023>: initial publication

License

Community use of the FAVDBench model and code requires adherence to the Apache 2.0 license. The FAVDBench model and code support commercial use.

Citation

If you use FAVD or FAVDBench in your research, please use the following BibTeX entry.

@InProceedings{Shen_2023_CVPR,
    author    = {Shen, Xuyang and Li, Dong and Zhou, Jinxing and Qin, Zhen and He, Bowen and Han, Xiaodong and Li, Aixuan and Dai, Yuchao and Kong, Lingpeng and Wang, Meng and Qiao, Yu and Zhong, Yiran},
    title     = {Fine-Grained Audible Video Description},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {10585-10596}
}