This repository contains the models described in the following paper:

Orhan AE (2024) HVM-1: Large-scale video models pretrained with nearly 5000 hours of human-like video data. arXiv:2407.18067.

These models were pretrained with the spatiotemporal MAE algorithm on ~5k hours of curated human-like video data (mostly egocentric, temporally extended, continuous video recordings) and then, optionally, finetuned on various downstream tasks with few-shot supervised training.
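As a rough illustration, the sketch below shows how one of the released checkpoints might be inspected with plain PyTorch. The checkpoint filename and the `"model"` state-dict key are assumptions for illustration only, not this repository's actual file layout; see the paper and its accompanying code for the exact model definitions and filenames.

```python
import torch

# Hypothetical checkpoint filename; the files in this repository may be named differently.
ckpt = torch.load("hvm1_checkpoint.pth", map_location="cpu")

# MAE-style checkpoints often nest the weights under a "model" key (assumption);
# fall back to treating the loaded object as the state dict itself otherwise.
state_dict = ckpt["model"] if isinstance(ckpt, dict) and "model" in ckpt else ckpt

print(f"Loaded {len(state_dict)} parameter tensors")
```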
