# InternVideo: Video Foundation Models for Multimodal Understanding

---

(Figure: InternVideo2 performance overview)
This repo contains the InternVideo series and related work on video foundation models.

- [InternVideo](InternVideo1): general video foundation models via generative and discriminative learning
- [InternVideo2](InternVideo2): scaling video foundation models for multimodal video understanding
- [InternVid](Data/InternVid): a large-scale video-text dataset for multimodal understanding and generation

## Updates

- `2024.03`: The [technical report](https://arxiv.org/abs/2403.15377) of InternVideo2 is released.
- `2024.01`: [InternVid](Data/InternVid) (a video-text dataset for video understanding and generation) has been accepted as a spotlight presentation at ICLR 2024.
- `2023.07`: The **video-text dataset InternVid** is released [here](Data/InternVid) to facilitate multimodal understanding and generation.
- `2023.05`: **Video instruction data** are released [here](Data/instruction_data) for tuning end-to-end video-centric multimodal dialogue systems like [VideoChat](https://github.com/OpenGVLab/Ask-Anything).
- `2023.01`: The [code & models](InternVideo1) of InternVideo are released.
- `2022.12`: The [technical report](https://arxiv.org/pdf/2212.03191.pdf) of InternVideo is released.
- `2022.09`: Press releases of InternVideo ([official](https://www.shlab.org.cn/news/5443279) | [163 news](https://www.163.com/dy/article/HG939TNR0530QRMB.html) | [qq news](https://new.qq.com/rain/a/20220902A053JP00)).

## Contact

- If you have any questions about trying, running, or deploying the models, or any ideas or suggestions for the project, feel free to join our WeChat group discussion!
(QR code for the WeChat group)
- We are hiring researchers, engineers, and interns at the General Vision Group, Shanghai AI Lab. If you are interested in working with us on video foundation models and related topics, please contact Yi Wang (wangyi@pjlab.org.cn).