GLUS: Global-Local Reasoning Unified into A Single Large Language Model for Video Segmentation

[Project Page] [arXiv] [GitHub]


Overview

Referring video object segmentation (RefVOS) in complex scenarios places high demands on a model's video understanding and fine-grained localization capabilities. Recently, numerous methods have leveraged MLLM-based comprehension and reasoning to address this challenge, and GLUS advances further along this path.

πŸš€ GLUS is principled. It uses global-local reasoning to combine holistic video understanding with detailed frame-level understanding, unleashing the potential of fine-grained segmentation in complex scenarios.

✨ GLUS is powerful. It unifies an end-to-end memory bank, object contrastive learning, and key frame selection to tackle mask inconsistency and object obfuscation, achieving state-of-the-art performance on complex-scenario RefVOS tasks.

πŸ“Œ GLUS is simple. It integrates the entire approach to complex-scenario RefVOS within a single MLLM framework, eliminating the need for separate, independent modules.

Installation

git clone git@github.com:GLUS-video/GLUS.git && cd GLUS
pip install -r requirements.txt
pip install ./model/segment-anything-2
pip install flash-attn==2.6.2 --no-build-isolation
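
As a quick sanity check, you can verify that the main dependencies import correctly (a minimal sketch; it assumes the packages expose their usual module names flash_attn and sam2):

# Verify that PyTorch, FlashAttention, and SAM-2 are importable after installation.
python -c "import torch, flash_attn, sam2; print('torch', torch.__version__, '- flash-attn and SAM-2 OK')"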

Model Zoo

For easier reproduction, we also provide checkpoints of GLUS trained without object contrastive learning.

Model | Training Datasets | Methods | Download | MeViS J&F | Ref-Youtube-VOS J&F
GLUS-S (partial) | MeViS, Ref-Youtube-VOS | GLU + MB | HuggingFace, ModelScope | 49.5 | 65.2
GLUS-S | MeViS, Ref-Youtube-VOS | GLU + MB + OC + KFS | HuggingFace, ModelScope | 50.3 | 66.6
GLUS-A | + Ref-DAVIS17, ReVOS, LVVIS | GLU + MB | HuggingFace, ModelScope | 51.3 | 67.3

Notes: β€œGLU”: global-local unification, β€œMB”: end-to-end memory bank, β€œOC”: object contrastive loss, β€œKFS”: key frame selection. GLUS-S refers to the model trained on a subset of existing RefVOS datasets (MeViS and Ref-Youtube-VOS), while GLUS-A denotes the model trained on the full set of available datasets.

We recommend downloading and storing the pretrained weights at GLUS_ROOT/checkpoints.
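
For example, a checkpoint can be fetched with the Hugging Face CLI (a minimal sketch; the repository id below is a placeholder, so substitute the actual id linked in the table above):

# Download a GLUS checkpoint into the recommended directory (repo id is a placeholder).
huggingface-cli download <GLUS_CHECKPOINT_REPO_ID> --local-dir $GLUS_ROOT/checkpoints/GLUS-S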

Training and Validation

1. Data Preparation

Please prepare the datasets following the directory structure below. We recommend setting DATASET_ROOT to GLUS_ROOT/data; a symlink sketch for doing so is shown after the tree.

  1. RefVOS Datasets: MeViS, Refer-YouTube-VOS, Ref-DAVIS17.
  2. Reasoning VOS Datasets: ReVOS, ReasonVOS.
  3. Open-Vocabulary Video Instance Segmentation Dataset: LV-VIS.
Dataset directory structure
DATASET_ROOT
β”œβ”€β”€ mevis
β”‚   β”œβ”€β”€ train
β”‚   β”‚   β”œβ”€β”€ JPEGImages
β”‚   β”‚   β”œβ”€β”€ mask_dict.json
β”‚   β”‚   └── meta_expressions.json
β”‚   β”œβ”€β”€ valid
β”‚   β”‚   β”œβ”€β”€ JPEGImages
β”‚   β”‚   └── meta_expressions.json
β”‚   └── valid_u
β”‚       β”œβ”€β”€ JPEGImages
β”‚       β”œβ”€β”€ mask_dict.json
β”‚       └── meta_expressions.json
β”œβ”€β”€ Refer-YouTube-VOS
β”‚   β”œβ”€β”€ meta_expressions
β”‚   β”‚   β”œβ”€β”€ train/meta_expressions.json
β”‚   β”‚   └── valid/meta_expressions.json
β”‚   β”œβ”€β”€ train
β”‚   β”‚   β”œβ”€β”€ JPEGImages
β”‚   β”‚   └── Annotations
β”‚   └── valid
β”‚       └── JPEGImages
β”œβ”€β”€ DAVIS17
β”‚   β”œβ”€β”€ meta_expressions
β”‚   β”‚   β”œβ”€β”€ train/meta_expressions.json
β”‚   β”‚   └── valid/meta_expressions.json
β”‚   β”œβ”€β”€ train
β”‚   β”‚   β”œβ”€β”€ JPEGImages
β”‚   β”‚   └── Annotations
β”‚   └── valid
β”‚       β”œβ”€β”€ JPEGImages
β”‚       └── Annotations
β”œβ”€β”€ LVVIS
β”‚   β”œβ”€β”€ train
β”‚   β”‚   └── JPEGImages
β”‚   β”œβ”€β”€ mask_dict.json
β”‚   └── meta_expressions.json
β”œβ”€β”€ ReVOS
β”‚   β”œβ”€β”€ JPEGImages 
β”‚   β”œβ”€β”€ mask_dict.json             
β”‚   β”œβ”€β”€ mask_dict_foreground.json   
β”‚   β”œβ”€β”€ meta_expressions_train_.json 
β”‚   └── meta_expressions_valid_.json 
└── ReasonVOS
    β”œβ”€β”€ JPEGImages
    β”œβ”€β”€ Annotations
    └── meta_expressions.json
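
If the datasets already exist elsewhere on disk, a simple way to match the layout above is to symlink them into GLUS_ROOT/data (a minimal sketch; the source paths are placeholders):

# Link existing dataset folders into the recommended DATASET_ROOT (source paths are placeholders).
mkdir -p $GLUS_ROOT/data
ln -s /path/to/mevis $GLUS_ROOT/data/mevis
ln -s /path/to/Refer-YouTube-VOS $GLUS_ROOT/data/Refer-YouTube-VOS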

2. Model Weights Preparation

Follow the steps below to prepare the pretrained weights of LISA and SAM-2 used for training GLUS:

  1. Download the pretrained weights of LISA from LISA-7B-v1.
  2. Download the pretrained weights of SAM-2 from sam2_hiera_large.
Then organize them in the following layout:
WEIGHTS_ROOT
β”œβ”€β”€ LISA-7B-v1
└── sam2_hiera_large.pt

We recommend setting WEIGHTS_ROOT to GLUS_ROOT/checkpoints.
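
For example (a sketch; the LISA repository id and the SAM-2 checkpoint URL are assumptions based on the upstream releases, so verify them against the links above):

# Download LISA-7B-v1 from the Hugging Face Hub (repo id assumed to be xinlai/LISA-7B-v1).
huggingface-cli download xinlai/LISA-7B-v1 --local-dir $GLUS_ROOT/checkpoints/LISA-7B-v1
# Download the SAM-2 large checkpoint (URL assumed from the official SAM-2 release).
wget -P $GLUS_ROOT/checkpoints https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_large.pt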

3. Training

Set the paths in the scripts and then run scripts/train_glus_s.sh or scripts/train_glus_a.sh. The scripts automatically start training and convert the saved checkpoint into Hugging Face format once training finishes.
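
For example, after editing the paths inside the script, training GLUS-S is a single command (a sketch; the exact variables to set are documented in the script itself):

# Launch training of GLUS-S once dataset and weight paths are configured in the script.
bash scripts/train_glus_s.sh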

Key Frame Selection

For details on using key frame selection, please refer to the KFS_README.

4. Evaluation

Set the paths, val_set, and set_name in scripts/inference.sh, and then run it. The script first detects the available GPUs and then runs inference in parallel, with one process per GPU.
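
For example (a sketch; paths, val_set, and set_name are edited inside the script as described above):

# Run multi-GPU inference after configuring the script.
bash scripts/inference.sh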

Evaluation with Key Frame Selection

Set the arguments use_kf and kf_path in scripts/inference_kf.sh, and then run it. We provide our key-frame JSON files for GLUS-S on MeViS and Ref-YouTube-VOS on Google Drive.

After all masks have been generated, run the corresponding evaluation Python file in utils. You may need to set the ground-truth mask path, predicted mask path, and expressions JSON file path; please refer to the evaluation files for help on the arguments.

An example:

python utils/eval_mevis.py \
  --mevis_exp_path="$GLUS_ROOT/data/mevis/valid_u/meta_expressions.json" \
  --mevis_mask_path="$GLUS_ROOT/data/mevis/valid_u/mask_dict.json" \
  --mevis_pred_path="$GLUS_ROOT/generated"

In particular, to evaluate on the Ref-YouTube-VOS valid or MeViS valid benchmarks, you need to submit the predicted mask results following the guidance at the MeViS-Evaluation-Server or RefYoutube-Evaluation-Server.
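
A packaging sketch for such a submission (the expected folder layout is an assumption; check each server's submission guide before uploading):

# Zip the predicted masks for upload to the evaluation server (layout assumed; verify against the server's guide).
cd $GLUS_ROOT/generated && zip -r ../submission.zip .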

Inference and Demo

Please refer to demo.ipynb to run inference on your own videos and referring expressions.

For more examples, please refer to our Project Page.

Citation

If you find this work useful in your research, please consider citing:

@inproceedings{lin2025glus,
  title={GLUS: Global-Local Reasoning Unified into A Single Large Language Model for Video Segmentation},
  author={Lin, Lang and Yu, Xueyang and Pang, Ziqi and Wang, Yu-Xiong},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2025}
}

Acknowledgement

We thank the contributors of the following open-source projects; our work would not have been possible without the inspiration from these excellent researchers.
