We evaluate existing foundation models' video understanding capabilities using a carefully designed experimental protocol consisting of three hallmark tasks (action recognition, temporal localization, and spatiotemporal localization), eight datasets well received by the community, and four adaptation methods tailoring a foundation model (FM) for a downstream task. Moreover, we propose a scalar VideoGLUE score (VGS) to measure an FM's efficacy and efficiency when adapting to general video understanding tasks. Our main findings are as follows. First, task-specialized models significantly outperform the six FMs studied in this work, in sharp contrast to what FMs have achieved in natural language and image understanding. Second, video-native FMs, whose pretraining data contains the video modality, are generally better than image-native FMs at classifying motion-rich videos, localizing actions in time, and understanding videos with more than one action. Third, video-native FMs can perform well on video tasks under light adaptations to downstream tasks (for example, freezing the FM backbones), whereas image-native FMs win with full end-to-end fine-tuning. The first two observations reveal the need for, and the tremendous opportunities in, research on video-focused FMs, and the last confirms that both tasks and adaptation methods matter when evaluating FMs.