Datasets:
Where can I find the DPO videos?
Nice work. I found that the videos provided here do not match the video IDs in the DPO data. So, where can I download the DPO videos? Thank you.
Hello,
Thanks for the comment. This dataset is for testing.
For DPO training, you need to go to https://huggingface.co/datasets/ShareGPTVideo/train_video_and_instruction/tree/main/train_300k
We release the video data as sequences of frames sampled at fps=1.
Let me know if you can't find them or encounter any problems during training.
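In case it helps, here is a minimal sketch of reading one extracted video back as its frame sequence. The folder layout and frame naming (one folder per video ID containing ordered image files) are assumptions, so adjust to what the extraction actually produces:

import os
from PIL import Image

def load_frames(video_dir):
    # Collect the image files for one video and keep them in sorted
    # (temporal) order. With fps=1 sampling, frame i corresponds to
    # roughly second i of the original video.
    frame_files = sorted(
        f for f in os.listdir(video_dir)
        if f.lower().endswith((".jpg", ".jpeg", ".png"))
    )
    return [Image.open(os.path.join(video_dir, f)).convert("RGB") for f in frame_files]

frames = load_frames("train_300k/<video_id>")  # hypothetical path
print(f"loaded {len(frames)} frames (~{len(frames)} seconds of video at fps=1)")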
Does this mean that if I only want to run the DPO phase, I still have to download the entire train_300k? And how many folders of frames should the extraction actually produce? (I have already extracted 380k, but it hasn't finished yet.)
Hi Xilin,
The DPO data is about 17k examples, so only a subset of the videos is used. However, I didn't split that subset out of the 300k, so you still need to download the entire 300k, probably <= 150 GB. If you follow the setup script in the README:
git clone git@github.com:RifleZhang/LLaVA-Hound-DPO.git
cd LLaVA-Hound-DPO
# fill in the required paths and token at: https://github.com/RifleZhang/LLaVA-Hound-DPO/blob/main/setup/set_path.sh
source setup/setup_env.sh
source setup/setup_train_data.sh
That should do the extraction automatically.
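If you only want the frame data without running the full setup, a sketch using huggingface_hub is below. snapshot_download and its allow_patterns filter are standard huggingface_hub API, but the train_300k/* pattern is an assumption based on the repo tree linked above:

from huggingface_hub import snapshot_download

# Download only the train_300k portion of the dataset repo.
# Assumes the frame data lives under train_300k/ in the repo tree,
# as the tree/main/train_300k URL above suggests.
snapshot_download(
    repo_id="ShareGPTVideo/train_video_and_instruction",
    repo_type="dataset",
    allow_patterns="train_300k/*",
    local_dir="train_video_and_instruction",
)

Note that this only downloads the files; the extraction step that setup_train_data.sh performs would still be needed.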