### Run example inference

```
python -m vtimellm.inference --model_base lmsys/vicuna-7b-v1.5 \
--pretrain_mm_mlp_adapter checkpoints/vicuna-7b-v1.5/vtimellm-vicuna-v1-5-7b/vtimellm-vicuna-v1-5-7b-stage1/mm_projector.bin \
--stage2 checkpoints/vicuna-7b-v1.5/vtimellm-vicuna-v1-5-7b/vtimellm-vicuna-v1-5-7b-stage2 \
--stage3 checkpoints/vicuna-7b-v1.5/vtimellm-vicuna-v1-5-7b/vtimellm-vicuna-v1-5-7b-stage3
```

To launch the Gradio demo instead (Gradio serves on port 7860 by default):

```
python demo_gradio.py --model_base lmsys/vicuna-7b-v1.5 \
--pretrain_mm_mlp_adapter ../checkpoints/vicuna-7b-v1.5/vtimellm-vicuna-v1-5-7b/vtimellm-vicuna-v1-5-7b-stage1/mm_projector.bin \
--stage2 ../checkpoints/vicuna-7b-v1.5/vtimellm-vicuna-v1-5-7b/vtimellm-vicuna-v1-5-7b-stage2 \
--stage3 ../checkpoints/vicuna-7b-v1.5/vtimellm-vicuna-v1-5-7b/vtimellm-vicuna-v1-5-7b-stage3
```
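Before launching, it can help to confirm the checkpoint artifacts resolve. A minimal pre-flight sketch; the paths are copied from the commands above and may need adjusting to your layout:

```
# Pre-flight check: verify the checkpoint files/directories referenced above exist.
from pathlib import Path

CKPT_ROOT = Path("checkpoints/vicuna-7b-v1.5/vtimellm-vicuna-v1-5-7b")
required = [
    CKPT_ROOT / "vtimellm-vicuna-v1-5-7b-stage1/mm_projector.bin",
    CKPT_ROOT / "vtimellm-vicuna-v1-5-7b-stage2",
    CKPT_ROOT / "vtimellm-vicuna-v1-5-7b-stage3",
]
for p in required:
    print(f"{'ok     ' if p.exists() else 'MISSING'} {p}")
```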

### Port forwarding

Forward the demo's port through the head node with a chained tunnel: the outer ssh forwards local port 7860 to newton, and the inner ssh (run on newton) forwards that port on to evc23, where the demo runs.

```
ssh -t -t -i /home/datasets/xitong_id_rsa xitong@newton.ist.ucf.edu -L 7860:localhost:7860 ssh evc23 -L 7860:localhost:7860
```

Variant using local port 12345:

```
ssh -t -t -i /home/datasets/xitong_id_rsa xitong@newton.ist.ucf.edu -L 12345:localhost:7860 ssh evc23 -L 7860:localhost:12345
```
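A quick way to confirm the tunnel is up from the local machine; a minimal sketch, assuming the demo should be reachable on localhost:7860 after forwarding:

```
# Check whether anything is listening on the forwarded local port.
import socket

def tunnel_is_up(host: str = "localhost", port: int = 7860) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(2.0)
        return s.connect_ex((host, port)) == 0

print("tunnel up" if tunnel_is_up() else "tunnel down")
```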

### Generate training datasets
```
python Shikra_V/VidSTG/read_annotation_multithread.py --vidstg VidSTG-Dataset/annotations/train_annotations.json --output vidstg_train.json
```
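A quick sanity check on the output; a minimal sketch, assuming read_annotation_multithread.py writes a JSON list of sample dicts:

```
# Count and peek at the generated training samples.
import json

with open("vidstg_train.json") as f:
    samples = json.load(f)

print(f"{len(samples)} samples")         # expected: 44087 for the train split
print(json.dumps(samples[0], indent=2))  # inspect the first record's fields
```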

### Generate validation datasets

```
# VidSTG validation still uses the VidOR training data, which is the default, so no extra paths are needed
python Shikra_V/VidSTG/read_annotation_multithread.py --vidstg VidSTG-Dataset/annotations/val_annotations.json --output vidstg_val.json
```

### Generate test datasets

```
# VidSTG test uses the VidOR validation data, so the VidOR annotation and video paths are overridden below
python Shikra_V/VidSTG/read_annotation_multithread.py --vidstg VidSTG-Dataset/annotations/test_annotations.json --vidor_anno_path_base vidor/validation_annotation/validation --vidor_path_base vidor/validation/video --output vidstg_test.json 
```

### Calculate IoU on the test dataset

```
python vtimellm/eval/eval.py --stage3 checkpoints/vtimellm-vicuna-v1-5-7b-stage3_all_bbox_freeze_mlp_adaper/checkpoint-1700 --data_path data/xl/vidstg_test.json --feat_folder data/xl/stage4_features_test --log_path vtimellm/eval/log/iou.txt --task iou
```
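For reference, the core of the metric is temporal IoU between a predicted and a ground-truth segment. A minimal sketch; the exact aggregation (mean IoU, recall at thresholds, bbox handling) lives in vtimellm/eval/eval.py and may differ:

```
# Temporal IoU between 1-D segments (start, end), in seconds or frame indices.
def temporal_iou(pred, gt):
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    # hull == union when segments overlap; inter is 0 otherwise, so the result is still correct
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

print(temporal_iou((2.0, 8.0), (4.0, 10.0)))  # 0.5
```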

### Verify using my trained stage2

This swaps in my stage-2 checkpoint (`stage2_xl`) while keeping the released stage-3 weights:
```
python demo_gradio.py --model_base lmsys/vicuna-7b-v1.5 \
--pretrain_mm_mlp_adapter ../checkpoints/vtimellm-vicuna-v1-5-7b-stage1/mm_projector.bin \
--stage2 ../checkpoints/vtimellm-vicuna-v1-5-7b-stage2_xl \
--stage3 ../checkpoints/vtimellm-vicuna-v1-5-7b-stage3
```

### Verify using my trained stage3

Conversely, this keeps the released stage-2 weights and swaps in my stage-3 checkpoint (`stage3_xl`):
```
python demo_gradio.py --model_base lmsys/vicuna-7b-v1.5 \
--pretrain_mm_mlp_adapter ../checkpoints/vtimellm-vicuna-v1-5-7b-stage1/mm_projector.bin \
--stage2 ../checkpoints/vtimellm-vicuna-v1-5-7b-stage2 \
--stage3 ../checkpoints/vtimellm-vicuna-v1-5-7b-stage3_xl
```

### Status

We have generated 44,087 training samples, 4,892 validation samples, and 5,655 test samples.
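For reference, the resulting split proportions, computed from the counts above:

```
# Split sizes reported above; print each split's share of the total.
splits = {"train": 44087, "val": 4892, "test": 5655}
total = sum(splits.values())
for name, n in splits.items():
    print(f"{name}: {n} ({n / total:.1%})")
```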