---
license: apache-2.0
---

# Vim Model Card

## Model Details

Vision Mamba (Vim) is a generic backbone trained on the ImageNet-1K dataset for vision tasks.

- **Developed by:** HUST, Horizon Robotics, BAAI
- **Model type:** A generic vision backbone based on the bidirectional state space model (SSM) architecture.
- **License:** Non-commercial license

## Model Sources

- **Repository:** https://github.com/hustvl/Vim
- **Paper:** https://arxiv.org/abs/2401.09417

## Uses

The primary use of Vim is research on vision tasks, e.g., image classification, semantic segmentation, object detection, and instance segmentation, with an SSM-based backbone. The primary intended users of the model are researchers and hobbyists in computer vision, machine learning, and artificial intelligence.

## How to Get Started with the Model
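
The snippet below is a minimal sketch, not an official loader: it assumes the checkpoint can be fetched with `huggingface_hub` (the `repo_id` and `filename` are guesses based on this repo's name; check the file list), and that the Vim-small architecture is instantiated from the model definitions in the official GitHub repository.

```python
# Minimal sketch, not an official snippet. Assumes:
#  - the Vim code from https://github.com/hustvl/Vim is importable
#    (it provides the model definitions), and
#  - the repo_id and checkpoint filename below match this repository
#    (both are guesses; check the "Files" tab).
import torch
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="hustvl/Vim-small-midclstok",     # assumed repo id
    filename="vim_s_midclstok_80p5acc.pth",   # hypothetical filename
)
checkpoint = torch.load(ckpt_path, map_location="cpu")
# Checkpoints of this kind often nest the weights under a "model" key.
state_dict = checkpoint.get("model", checkpoint)

# Instantiate the Vim-small architecture from the official code, then:
# model.load_state_dict(state_dict)
```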

## Training Details

Vim is pretrained on ImageNet-1K with classification supervision. The training data consists of around 1.3M images from the ImageNet-1K dataset. See the [paper](https://arxiv.org/abs/2401.09417) for more details.

## Evaluation

Vim-small is evaluated on the ImageNet-1K validation set and achieves 80.5% top-1 accuracy. With further fine-tuning at a finer granularity, Vim-small reaches 81.6% top-1 accuracy. See the [paper](https://arxiv.org/abs/2401.09417) for more details.
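
For reference, a generic top-1 accuracy loop of the kind such numbers come from; the 224×224 center-crop preprocessing below is a common ImageNet default and an assumption, not necessarily the exact protocol from the paper.

```python
# Generic ImageNet-1K top-1 evaluation sketch; preprocessing is a common
# default and may differ from the exact protocol used in the paper.
import torch
from torchvision import datasets, transforms

@torch.no_grad()
def top1_accuracy(model, val_dir, device="cuda", batch_size=128):
    tf = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    loader = torch.utils.data.DataLoader(
        datasets.ImageFolder(val_dir, tf),
        batch_size=batch_size, num_workers=8, pin_memory=True)
    model.eval().to(device)
    correct = total = 0
    for images, labels in loader:
        logits = model(images.to(device, non_blocking=True))
        correct += (logits.argmax(dim=1).cpu() == labels).sum().item()
        total += labels.size(0)
    return correct / total  # e.g. ~0.805 for Vim-small per this card
```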

## Additional Information

### Citation Information

```bibtex
@article{vim,
  title={Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model},
  author={Lianghui Zhu and Bencheng Liao and Qian Zhang and Xinlong Wang and Wenyu Liu and Xinggang Wang},
  journal={arXiv preprint arXiv:2401.09417},
  year={2024}
}
```