---
license: apache-2.0
---

# 👁️ VideoGPT+ (Phi-3-mini-4K 3.8B - Projector Pretrain Weights)

## 📝 Description

VideoGPT+ integrates image and video encoders to leverage detailed spatial understanding and global temporal context, respectively. It processes videos in segments, applying adaptive pooling to the features from both encoders, which improves performance across a range of video benchmarks.

**This model contains the pretrained projector weights for the image encoder (CLIP L/14) and the video encoder (InternVideo2).**

## 💻 Download

To get started, clone the repository:

```shell
git lfs install
git clone https://huggingface.co/MBZUAI/VideoGPT-plus_Phi3-mini-4k_Pretrain
```

## 📚 Additional Resources

- **Paper:** [ArXiv](https://arxiv.org/abs/2406.09418)
- **GitHub Repository:** For training code and updates: [GitHub - VideoGPT+](https://github.com/mbzuai-oryx/VideoGPT-plus)
- **HuggingFace Collection:** For the pretrained checkpoints, the VCGBench-Diverse benchmark, and the training data, visit [HuggingFace Collection - VideoGPT+](https://huggingface.co/collections/MBZUAI/videogpt-665c8643221dda4987a67d8d)

## 📜 Citations and Acknowledgments

```bibtex
@article{Maaz2024VideoGPT+,
    title={VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding},
    author={Maaz, Muhammad and Rasheed, Hanoona and Khan, Salman and Khan, Fahad Shahbaz},
    journal={arXiv},
    year={2024},
    url={https://arxiv.org/abs/2406.09418}
}
```
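As an illustration of the segment-wise adaptive pooling mentioned in the description, the sketch below average-pools a per-segment token sequence down to a fixed number of tokens. This is a minimal, hypothetical sketch with illustrative names and shapes, not the repository's actual implementation.

```python
import numpy as np

def adaptive_avg_pool(features: np.ndarray, out_tokens: int) -> np.ndarray:
    """Average-pool a (num_tokens, dim) feature map down to (out_tokens, dim).

    Illustrative only: the real model applies pooling to CLIP / InternVideo2
    features before projecting them into the LLM embedding space.
    """
    num_tokens, _ = features.shape
    # Split the token axis into `out_tokens` roughly equal bins and average each.
    bins = np.array_split(np.arange(num_tokens), out_tokens)
    return np.stack([features[idx].mean(axis=0) for idx in bins])

# Toy example: pool 8 frame-level features of dim 4 down to 2 tokens.
segment = np.arange(32, dtype=np.float32).reshape(8, 4)
pooled = adaptive_avg_pool(segment, 2)
print(pooled.shape)  # (2, 4)
```

The same pooling would be applied independently to each video segment, keeping the total token count fed to the language model bounded.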