Shikra_V / run_vtimellm.md

Run example inference

python -m vtimellm.inference --model_base lmsys/vicuna-7b-v1.5 \
--pretrain_mm_mlp_adapter checkpoints/vicuna-7b-v1.5/vtimellm-vicuna-v1-5-7b/vtimellm-vicuna-v1-5-7b-stage1/mm_projector.bin \
--stage2 checkpoints/vicuna-7b-v1.5/vtimellm-vicuna-v1-5-7b/vtimellm-vicuna-v1-5-7b-stage2 \
--stage3 checkpoints/vicuna-7b-v1.5/vtimellm-vicuna-v1-5-7b/vtimellm-vicuna-v1-5-7b-stage3

python demo_gradio.py --model_base lmsys/vicuna-7b-v1.5 \
--pretrain_mm_mlp_adapter ../checkpoints/vicuna-7b-v1.5/vtimellm-vicuna-v1-5-7b/vtimellm-vicuna-v1-5-7b-stage1/mm_projector.bin \
--stage2 ../checkpoints/vicuna-7b-v1.5/vtimellm-vicuna-v1-5-7b/vtimellm-vicuna-v1-5-7b-stage2 \
--stage3 ../checkpoints/vicuna-7b-v1.5/vtimellm-vicuna-v1-5-7b/vtimellm-vicuna-v1-5-7b-stage3

Port forwarding

ssh -t -t -i /home/datasets/xitong_id_rsa xitong@newton.ist.ucf.edu -L 7860:localhost:7860 ssh evc23 -L 7860:localhost:7860

ssh -t -t -i /home/datasets/xitong_id_rsa xitong@newton.ist.ucf.edu -L 12345:localhost:7860 ssh evc23 -L 7860:localhost:12345
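The same two-hop tunnel can be written more compactly with OpenSSH's ProxyJump directive (OpenSSH 7.3+). A sketch of an equivalent ~/.ssh/config entry, assuming the Gradio demo listens on port 7860 on evc23; the host alias and the evc23 user name are assumptions:

```
# ~/.ssh/config -- jump through newton to reach evc23
Host evc23-tunnel
    HostName evc23
    User xitong
    ProxyJump xitong@newton.ist.ucf.edu
    IdentityFile /home/datasets/xitong_id_rsa
    LocalForward 7860 localhost:7860
```

Then `ssh -N evc23-tunnel` opens the tunnel without starting a remote shell.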

Generate training datasets

python Shikra_V/VidSTG/read_annotation_multithread.py --vidstg VidSTG-Dataset/annotations/train_annotations.json --output vidstg_train.json

Generate validation datasets

# VidSTG validation still uses VidOR training data, which is the default, so no extra paths need to be specified.
python Shikra_V/VidSTG/read_annotation_multithread.py --vidstg VidSTG-Dataset/annotations/val_annotations.json --output vidstg_val.json

Generate test datasets

# VidSTG test uses VidOR validation data.
python Shikra_V/VidSTG/read_annotation_multithread.py --vidstg VidSTG-Dataset/annotations/test_annotations.json --vidor_anno_path_base vidor/validation_annotation/validation --vidor_path_base vidor/validation/video --output vidstg_test.json 
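After generating the three files, a quick sanity check is to count the samples in each. A minimal sketch, assuming each output file is a top-level JSON list; `count_samples` is a hypothetical helper, not part of the repo:

```python
import json
import tempfile

def count_samples(path):
    # Assumes the generated file is a top-level JSON list of samples.
    with open(path) as f:
        return len(json.load(f))

# Demo with a stand-in file; the real files are vidstg_train.json,
# vidstg_val.json, and vidstg_test.json.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump([{"id": i} for i in range(3)], f)
    tmp_path = f.name

print(count_samples(tmp_path))  # prints 3
```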

Calculate IoU using the test dataset

python vtimellm/eval/eval.py --stage3 checkpoints/vtimellm-vicuna-v1-5-7b-stage3_all_bbox_freeze_mlp_adaper/checkpoint-1700 --data_path data/xl/vidstg_test.json --feat_folder data/xl/stage4_features_test --log_path vtimellm/eval/log/iou.txt --task iou
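For reference, the temporal IoU that this evaluation reports can be sketched as follows. This is a minimal reimplementation for illustration, not the actual code in vtimellm/eval/eval.py:

```python
def temporal_iou(pred, gt):
    """IoU between two (start, end) segments (in seconds or frame indices)."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

print(temporal_iou((0, 10), (5, 15)))  # overlap 5 over span 15 -> 0.333...
```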

Verify using my trained stage 2

python demo_gradio.py --model_base lmsys/vicuna-7b-v1.5 \
--pretrain_mm_mlp_adapter ../checkpoints/vtimellm-vicuna-v1-5-7b-stage1/mm_projector.bin \
--stage2 ../checkpoints/vtimellm-vicuna-v1-5-7b-stage2_xl \
--stage3 ../checkpoints/vtimellm-vicuna-v1-5-7b-stage3

Verify using my trained stage 3

python demo_gradio.py --model_base lmsys/vicuna-7b-v1.5 \
--pretrain_mm_mlp_adapter ../checkpoints/vtimellm-vicuna-v1-5-7b-stage1/mm_projector.bin \
--stage2 ../checkpoints/vtimellm-vicuna-v1-5-7b-stage2 \
--stage3 ../checkpoints/vtimellm-vicuna-v1-5-7b-stage3_xl

Status

We have generated 44,087 training samples, 4,892 validation samples, and 5,655 test samples.
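The split sizes above can be double-checked with a little arithmetic; the three splits come out to roughly 81% / 9% / 10% of the total:

```python
train, val, test = 44087, 4892, 5655
total = train + val + test
print(total)  # 54634
for name, n in (("train", train), ("val", val), ("test", test)):
    print(f"{name}: {n / total:.1%}")
```

This prints train 80.7%, val 9.0%, test 10.4%.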