---
license: mit
---

# 10M SAM

The original JSON was obtained from [v1.1.0](https://huggingface.co/datasets/LanguageBind/Open-Sora-Plan-v1.1.0/tree/main/anno_jsons), with only the resolution information added. The image annotation file format is as follows.

```
[
  {
    "path": "00168/001680102.jpg",
    "cap": [
      "xxxxx."
    ],
    "resolution": {
      "height": 512,
      "width": 683
    }
  },
  ...
]
```

# 6M HQ Panda70m

The video annotation file format is as follows. Each element's `path` follows the structure `part_x/youtube_id/youtube_id_segment_i.mp4`. Here, `part_x` is our custom organizational folder, which can be adjusted to match your download path. The `youtube_id` and `segment_i` can be obtained from the [original annotation file](https://github.com/snap-research/Panda-70M/tree/main/dataset_dataloading).

```
[
  {
    "path": "panda70m_part_5565/qLqjjDhhD5Q/qLqjjDhhD5Q_segment_0.mp4",
    "cap": [
      "A man and a woman are sitting down on a news anchor talking to each other."
    ],
    "resolution": {
      "height": 720,
      "width": 1280
    },
    "fps": 29.97002997002997,
    "duration": 11.444767
  },
  ...
]
```

# 100k HQ data

The original data was obtained from [v1.1.0](https://huggingface.co/datasets/LanguageBind/Open-Sora-Plan-v1.1.0/tree/main). We reorganized the captions.
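
As a sketch of how the annotation schema and path convention might be consumed, the snippet below parses a video annotation record and filters clips by resolution and duration. The helper names (`parse_path`, `filter_clips`) and thresholds are illustrative, not part of the dataset; a real script would load the full JSON file with `json.load` instead of the inline sample.

```python
import json
from pathlib import Path

# One record in the video annotation schema documented above,
# used here as an inline stand-in for json.load(open("video_anno.json")).
sample = [
    {
        "path": "panda70m_part_5565/qLqjjDhhD5Q/qLqjjDhhD5Q_segment_0.mp4",
        "cap": [
            "A man and a woman are sitting down on a news anchor talking to each other."
        ],
        "resolution": {"height": 720, "width": 1280},
        "fps": 29.97002997002997,
        "duration": 11.444767,
    }
]

def parse_path(p):
    """Split `part_x/youtube_id/youtube_id_segment_i.mp4` into its parts."""
    part, youtube_id, fname = Path(p).parts
    # Recover segment index i from the `youtube_id_segment_i` stem.
    segment = int(fname[: -len(".mp4")].rsplit("_segment_", 1)[1])
    return part, youtube_id, segment

def filter_clips(annos, min_height=720, min_duration=5.0):
    """Keep clips at or above a resolution/duration threshold (illustrative)."""
    return [
        a
        for a in annos
        if a["resolution"]["height"] >= min_height
        and a["duration"] >= min_duration
    ]

part, youtube_id, segment = parse_path(sample[0]["path"])
print(part, youtube_id, segment)  # → panda70m_part_5565 qLqjjDhhD5Q 0
print(len(filter_clips(sample)))  # → 1
```

Because `part_x` is only an organizational folder, `parse_path` treats it as opaque; only `youtube_id` and the segment index map back to the original Panda-70M annotations.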