Apollo: An Exploration of Video Understanding in Large Multimodal Models

Apollo is a family of Large Multimodal Models (LMMs) that pushes the state of the art in video understanding. Apollo supports tasks including:

  • Long-form video comprehension
  • Temporal reasoning
  • Complex video question-answering
  • Multi-turn conversations grounded in video content

Apollo models excel at handling hour-long videos, balancing speed and accuracy through strategic design decisions. At just 3B parameters, Apollo outperforms most 7B competitors and even rivals models at the 30B scale.

Key Highlights:

  • Scaling Consistency: Design decisions validated on smaller models and datasets effectively transfer to larger scales, reducing computation and experimentation costs.
  • Efficient Video Sampling: fps sampling (rather than uniform frame sampling) and token resampling strategies such as Perceiver resampling yield stronger temporal perception; see the sketch after this list.
  • Encoder Synergies: Combining SigLIP-SO400M (image) with InternVideo2 (video) delivers a robust representation, outperforming single encoders on temporal tasks.
  • ApolloBench: A streamlined evaluation benchmark (41x faster to run than existing benchmark suites) that focuses on true video understanding capabilities.
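
To make the fps-sampling idea concrete, the snippet below is a minimal, hypothetical sketch (not Apollo's implementation): frames are selected at a fixed rate per second of footage, so longer videos contribute proportionally more frames, whereas uniform sampling returns the same frame count regardless of duration.

# Hypothetical illustration of fps sampling vs. uniform frame sampling; not Apollo's code.
def fps_sample_indices(num_frames: int, native_fps: float, target_fps: float) -> list[int]:
    """Select frame indices at `target_fps` frames per second of video."""
    step = native_fps / target_fps              # stride between sampled frames
    return [int(i * step) for i in range(int(num_frames / step))]

def uniform_sample_indices(num_frames: int, num_samples: int) -> list[int]:
    """Select a fixed number of evenly spaced frame indices, regardless of duration."""
    step = num_frames / num_samples
    return [int(i * step) for i in range(num_samples)]

# A 60-second video at 30 fps: sampling at 2 fps keeps 120 frames,
# while uniform sampling with num_samples=32 keeps 32 frames no matter how long the video is.
print(len(fps_sample_indices(1800, native_fps=30.0, target_fps=2.0)))  # -> 120
print(len(uniform_sample_indices(1800, num_samples=32)))               # -> 32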

Quick Start

Installation:

# Run from the root of the cloned Apollo repository
pip install -e .
pip install flash-attn --no-build-isolation

Inference Example:

import torch
from transformers import AutoModelForCausalLM
from apollo.mm_utils import (
    KeywordsStoppingCriteria,
    tokenizer_mm_token,
    ApolloMMLoader
)
from apollo.conversation import conv_templates, SeparatorStyle
from huggingface_hub import snapshot_download

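# Download the model repository (weights + custom code) from the Hugging Face Hub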
model_url = "Apollo-LMMs/Apollo-3B-t32"
model_path = snapshot_download(model_url, repo_type="model")

device = "cuda" if torch.cuda.is_available() else "cpu"
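# trust_remote_code is required because the repo ships custom modeling code; weights are cast to bfloat16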
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    trust_remote_code=True,
    low_cpu_mem_usage=True
).to(device=device, dtype=torch.bfloat16)

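# The tokenizer and vision processors are exposed directly on the loaded model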
tokenizer = model.tokenizer
vision_processors = model.vision_tower.vision_processor
config = model.config
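# Number of tokens the multimodal connector outputs, read from the model config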
num_repeat_token = config.mm_connector_cfg['num_output_tokens']
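# ApolloMMLoader splits the video into clips of config.clip_duration seconds, samples
# frames from each clip, and prepares the tensors passed to model.generate()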
mm_processor = ApolloMMLoader(
    vision_processors,
    config.clip_duration,
    frames_per_clip=4,
    clip_sampling_ratio=0.65,
    model_max_length=config.model_max_length,
    device=device,
    num_repeat_token=num_repeat_token
)

video_path = "path/to/video.mp4"
question = "Describe this video in detail"
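# load_video returns the processed video tensors plus the placeholder string to splice into the prompt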
mm_data, replace_string = mm_processor.load_video(video_path)

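# Build the chat prompt using the Qwen2 conversation template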
conv = conv_templates["qwen_2"].copy()
conv.append_message(conv.roles[0], replace_string + "\n\n" + question)
conv.append_message(conv.roles[1], None)

prompt = conv.get_prompt()
input_ids = tokenizer_mm_token(prompt, tokenizer, return_tensors="pt").unsqueeze(0).to(device)

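# Stop generation when the template's separator string is produced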
stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2
stopping_criteria = KeywordsStoppingCriteria([stop_str], tokenizer, input_ids)

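# Generate the answer, passing the video tensors alongside the token ids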
with torch.inference_mode():
    output_ids = model.generate(
        input_ids,
        vision_input=[mm_data],
        data_types=['video'],
        do_sample=True,
        temperature=0.4,
        max_new_tokens=256,
        top_p=0.7,
        use_cache=True,
        num_beams=1,
        stopping_criteria=[stopping_criteria]
    )

pred = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
print(pred)
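
Because Apollo also supports multi-turn conversations grounded in the video, a follow-up question can reuse the same vision input. The snippet below is a hedged sketch built only from the calls shown above (the follow-up question string is illustrative); consult the official repository for the canonical multi-turn flow.

# Hypothetical follow-up turn, reusing mm_data and the objects created above.
conv = conv_templates["qwen_2"].copy()
conv.append_message(conv.roles[0], replace_string + "\n\n" + question)
conv.append_message(conv.roles[1], pred)                                      # first answer
conv.append_message(conv.roles[0], "What happens at the end of the video?")   # follow-up question
conv.append_message(conv.roles[1], None)

follow_up_ids = tokenizer_mm_token(conv.get_prompt(), tokenizer, return_tensors="pt").unsqueeze(0).to(device)
with torch.inference_mode():
    follow_up_out = model.generate(
        follow_up_ids,
        vision_input=[mm_data],
        data_types=['video'],
        do_sample=True,
        temperature=0.4,
        max_new_tokens=256,
        top_p=0.7,
        use_cache=True,
        num_beams=1,
        stopping_criteria=[KeywordsStoppingCriteria([stop_str], tokenizer, follow_up_ids)]
    )
print(tokenizer.batch_decode(follow_up_out, skip_special_tokens=True)[0].strip())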

Citation

If you find this project useful, please consider citing:

@article{zohar2024apollo,
    title={Apollo: An Exploration of Video Understanding in Large Multimodal Models},
    author={Zohar, Orr and Wang, Xiaohan and Dubois, Yann and Mehta, Nikhil and Xiao, Tong and Hansen-Estruch, Philippe and Yu, Licheng and Wang, Xiaofang and Juefei-Xu, Felix and Zhang, Ning and Yeung-Levy, Serena and Xia, Xide},
    journal={arXiv preprint arXiv:2412.10360},
    year={2024}
}

For more details, visit the project website or check out the paper.
