arxiv:2502.17092

Shakti-VLMs: Scalable Vision-Language Models for Enterprise AI

Published on Feb 24
· Submitted by SyedAbdul on Feb 26

Abstract

We introduce Shakti VLM, a family of vision-language models at 1B and 4B parameters designed to address data-efficiency challenges in multimodal learning. While recent VLMs achieve strong performance through extensive training data, Shakti models leverage architectural innovations to attain competitive results with fewer tokens. Key advancements include QK-Normalization for attention stability, hybrid normalization techniques, and enhanced positional encoding. A three-stage training strategy further optimizes learning efficiency. Evaluations show that Shakti-VLM-1B and Shakti-VLM-4B excel in document understanding, visual reasoning, OCR extraction, and general multimodal reasoning. Our results highlight that high performance can be achieved through model design and training strategy rather than sheer data volume, making Shakti an efficient solution for enterprise-scale multimodal tasks.
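The abstract names QK-Normalization as one of the stability mechanisms but does not spell out its formulation. Below is a minimal sketch of the commonly used variant (normalizing queries and keys per head before the dot product, in the spirit of ViT-22B-style QK-LayerNorm); the module name QKNormAttention and all hyperparameters are illustrative, and Shakti-VLM's exact implementation may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QKNormAttention(nn.Module):
    """Multi-head self-attention with QK-Normalization (illustrative sketch).

    Queries and keys are layer-normalized per head before the dot product,
    which bounds attention logits and helps training stability.
    """
    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.out = nn.Linear(dim, dim, bias=False)
        # QK-Norm: separate norms for queries and keys, applied per head.
        self.q_norm = nn.LayerNorm(self.head_dim)
        self.k_norm = nn.LayerNorm(self.head_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape to (batch, heads, tokens, head_dim).
        q = q.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        # Normalize Q and K before computing attention scores.
        q = self.q_norm(q)
        k = self.k_norm(k)
        attn = F.scaled_dot_product_attention(q, k, v)
        return self.out(attn.transpose(1, 2).reshape(b, n, d))
```

Design note: because the logits become a product of normalized vectors, their scale no longer grows with the hidden dimension or with unbounded Q/K activations, which is why QK-Normalization is typically cited as an attention-stability measure.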

