---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: CLIP-Kinetics700
size_categories:
- 100K<n<1M
---

# Dataset Card for CLIP-Kinetics700

## Dataset Structure

### Data Fields

Each sample is stored as three files sharing a common key inside a tar shard (a minimal loading sketch appears at the end of this card):

* vid.npy: the per-frame CLIP embeddings of the video, a NumPy array of shape (n_frames, 512).
* vid.cap: the "caption" of the video. In this case it is the Kinetics700 label.
* vid.json: additional metadata - YouTube video ID, start time, end time.

### Data Splits

* Train - 536489 samples | 54 tar files
* Validation - 33966 samples | 4 tar files
* Test - 64532 samples | 7 tar files

## Dataset Creation

### Source Data

Data was sourced from DeepMind's [Kinetics700](https://www.deepmind.com/open-source/kinetics) dataset and downloaded using [this](https://github.com/cvdfoundation/kinetics-dataset) convenient repository.

## Simple Experiments

Using [this repository](https://github.com/LAION-AI/temporal-embedding-aggregation), we evaluate CLIP-Kinetics700 with the following simple methods:

### [Zero-shot Evaluation](https://github.com/LAION-AI/temporal-embedding-aggregation/blob/master/src/evaluation/zero_shot.py)

|                    | Accuracy |
| ------------------ | -------- |
| Top-1              | 0.31     |
| Top-5              | 0.56     |
| mean(Top-1, Top-5) | 0.44     |

### [Linear-probe Evaluation](https://github.com/LAION-AI/temporal-embedding-aggregation/blob/master/src/evaluation/linear_probe.py)

|                    | Accuracy |
| ------------------ | -------- |
| Top-1              | 0.41     |
| Top-5              | 0.65     |
| mean(Top-1, Top-5) | 0.53     |
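
## Example Usage

The three-files-per-key tar layout follows the [WebDataset](https://github.com/webdataset/webdataset) convention. The sketch below reads one shard with only the standard library plus NumPy; the shard name `ds_00000.tar` and the helper `read_shard` are illustrative placeholders, not part of the dataset tooling.

```python
import io
import json
import tarfile

import numpy as np


def read_shard(path):
    """Yield (embeddings, caption, metadata) for each sample in one tar shard."""
    samples = {}
    with tarfile.open(path) as tar:
        for member in tar.getmembers():
            if not member.isfile():
                continue
            # Group the three files of a sample by their shared key.
            key, _, ext = member.name.rpartition(".")
            samples.setdefault(key, {})[ext] = tar.extractfile(member).read()
    for parts in samples.values():
        emb = np.load(io.BytesIO(parts["npy"]))  # (n_frames, 512) frame embeddings
        cap = parts["cap"].decode("utf-8")       # Kinetics700 label
        meta = json.loads(parts["json"])         # YouTube ID, start time, end time
        yield emb, cap, meta


for emb, cap, meta in read_shard("ds_00000.tar"):
    print(emb.shape, cap, meta)
    break
```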
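
For reference, zero-shot classification over precomputed frame embeddings is commonly done by mean-pooling the frames into a single video embedding and comparing it against CLIP text embeddings of the class names. The sketch below follows that recipe; the prompt template, the `ViT-B/32` checkpoint (chosen because it produces 512-dimensional embeddings, matching vid.npy), and the pooling choice are assumptions, not necessarily what the linked `zero_shot.py` does.

```python
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git
import numpy as np
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)  # 512-dim, matching vid.npy

labels = ["abseiling", "acting in play", "zumba"]  # stand-in for the 700 class names

with torch.no_grad():
    tokens = clip.tokenize([f"a video of {label}" for label in labels]).to(device)
    text = model.encode_text(tokens).float()
    text /= text.norm(dim=-1, keepdim=True)  # unit-normalize for cosine similarity


def zero_shot_predict(frame_embeddings: np.ndarray) -> str:
    """Mean-pool per-frame embeddings and return the closest label."""
    video = torch.from_numpy(frame_embeddings).float().mean(dim=0, keepdim=True)
    video = video.to(device)
    video /= video.norm(dim=-1, keepdim=True)
    return labels[int((video @ text.T).argmax())]
```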
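
Similarly, a linear probe in the usual CLIP sense is a logistic-regression classifier trained on the frozen embeddings. Below is a minimal scikit-learn version with synthetic stand-in arrays; in practice `X_*` would be mean-pooled (n_videos, 512) arrays built from vid.npy and `y_*` integer labels derived from vid.cap (again, not necessarily the linked script's exact setup).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins for pooled embeddings and label indices.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 512)), rng.integers(0, 10, size=200)
X_test, y_test = rng.normal(size=(50, 512)), rng.integers(0, 10, size=50)

probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)

top1 = probe.score(X_test, y_test)

# Top-5: is the true class among the 5 most probable predictions?
proba = probe.predict_proba(X_test)  # columns follow probe.classes_
top5_classes = probe.classes_[np.argsort(proba, axis=1)[:, -5:]]
top5 = np.mean([y in row for y, row in zip(y_test, top5_classes)])

print(f"Top-1: {top1:.2f} | Top-5: {top5:.2f}")
```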