arxiv:2506.01844

SmolVLA: A Vision-Language-Action Model for Affordable and Efficient Robotics

Published on Jun 2
Submitted by andito on Jun 3
#2 Paper of the day

AI-generated summary

SmolVLA is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.

Abstract

Vision-language models (VLMs) pretrained on large-scale multimodal datasets encode rich visual and linguistic knowledge, making them a strong foundation for robotics. Rather than training robotic policies from scratch, recent approaches adapt VLMs into vision-language-action (VLA) models that enable natural language-driven perception and control. However, existing VLAs are typically massive--often with billions of parameters--leading to high training costs and limited real-world deployability. Moreover, they rely on academic and industrial datasets, overlooking the growing availability of community-collected data from affordable robotic platforms. In this work, we present SmolVLA, a small, efficient, and community-driven VLA that drastically reduces both training and inference costs, while retaining competitive performance. SmolVLA is designed to be trained on a single GPU and deployed on consumer-grade GPUs or even CPUs. To further improve responsiveness, we introduce an asynchronous inference stack decoupling perception and action prediction from action execution, allowing higher control rates with chunked action generation. Despite its compact size, SmolVLA achieves performance comparable to VLAs that are 10x larger. We evaluate SmolVLA on a range of both simulated as well as real-world robotic benchmarks and release all code, pretrained models, and training data.
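
To make the asynchronous inference idea above a bit more concrete, here is a minimal sketch of decoupling action-chunk prediction from action execution with two threads and a queue. It is not the actual LeRobot implementation; names such as predict_chunk, send_action, and get_observation are hypothetical placeholders.

```python
# Minimal sketch of asynchronous inference: a predictor thread turns fresh
# observations into action chunks while an executor thread keeps sending actions
# at the control rate. `predict_chunk`, `send_action`, and `get_observation`
# are hypothetical placeholders, not the real LeRobot API.
import queue
import threading
import time

chunk_queue: queue.Queue = queue.Queue(maxsize=1)

def predictor(policy, get_observation, stop: threading.Event):
    """Continuously predict the next action chunk from the latest observation."""
    while not stop.is_set():
        obs = get_observation()                  # latest camera frames + robot state
        chunk = list(policy.predict_chunk(obs))  # e.g. a list of n future actions
        try:
            chunk_queue.put(chunk, timeout=0.1)  # hand the chunk to the executor
        except queue.Full:
            pass                                 # executor is still consuming the old chunk

def executor(robot, stop: threading.Event, control_hz: float = 30.0):
    """Send actions at a fixed rate, switching to a newer chunk when one is ready."""
    current: list = []
    while not stop.is_set():
        try:
            current = chunk_queue.get_nowait()   # prefer the freshest chunk
        except queue.Empty:
            pass
        if current:
            robot.send_action(current.pop(0))
        time.sleep(1.0 / control_hz)

# Usage (with real policy/robot/get_observation objects):
# stop = threading.Event()
# threading.Thread(target=predictor, args=(policy, get_observation, stop), daemon=True).start()
# threading.Thread(target=executor, args=(robot, stop), daemon=True).start()
```

Because prediction and execution overlap, the controller never has to idle while the policy runs, which is what enables the higher control rates mentioned in the abstract.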

Community

Paper author · Paper submitter

SmolVLA is a small, efficient, and community-driven VLA that drastically reduces both training and inference costs, while retaining competitive performance.
Authors will be around so let's talk!


Wowow this is super cool! (sorry for low info comment)

Great read. Section 3 is a goldmine of its own.

Paper author

🥰 thank you so much! 🤗

The paper states that the model is trained on 4 GPUs, corresponding to 30k GPU-hours, but that is equivalent to 30k / 24 / 4 ≈ 312 days. Is the number correct?


I asked the author the same question.
It's the project's total, which accounts for 100+ models trained for architecture tweaking, hyperparameter tuning, ablations, and of course testing.

Especially love the async inference contributions here. After trying to run Gr00t on a cloud GPU a few weeks back and seeing network latency significantly impact performance, I really appreciate the idea of parallelising inference with action execution.

I hope we see other VLAs adopting this architecture; it feels like a key step toward robots sharing cloud GPUs rather than depending on local hardware (reducing marginal cost and increasing maintainability!).

Paper author

Hey @willnorris thank you so much for your words---we're glad you liked the report, and async inference 😉
We're hard at work to make sure the stack lands on main soon. It's already compatible with all the policy types LeRobot supports, and open-sourcing everything is our effort to make this the standard paradigm for the community. Why lagging? 🤓

If you're interested in following progress, check the PR here 🔗 https://github.com/huggingface/lerobot/pull/1196

Is only SO-100 execution data included in the dataset for pretraining?


yes, correct!

Hey guys,
Amazing work!

I think there is a typo: in the "Improving Task Annotations" section, the link refers to Qwen/Qwen2.5-3B-Instruct (it should be Qwen/Qwen2.5-VL-3B-Instruct).


Hello! Thank you!

Great catch! Yes, you are right; the correct link is Qwen/Qwen2.5-VL-3B-Instruct.
Sorry about this!
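
For anyone curious what VLM-based annotation cleanup might look like in practice, below is a rough, unofficial sketch using Qwen/Qwen2.5-VL-3B-Instruct with the standard transformers + qwen_vl_utils recipe. The frame path, prompt wording, and noisy instruction are made-up examples; this is not the paper's exact pipeline.

```python
# Rough sketch of regenerating a task annotation with Qwen2.5-VL-3B-Instruct.
# The frame path, prompt wording, and original instruction are illustrative only.
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info  # helper from the qwen-vl-utils package

model_id = "Qwen/Qwen2.5-VL-3B-Instruct"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "episode_000_frame_000.jpg"},  # hypothetical frame
        {"type": "text", "text": "Given this robot camera frame and the noisy instruction "
                                 "'task desc', write one short, clear task instruction."},
    ],
}]

# Standard Qwen2.5-VL preprocessing: chat template + vision inputs.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(text=[text], images=image_inputs, videos=video_inputs,
                   padding=True, return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=32)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```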

Hi SmolVLA team,

Awesome work! Really cool how a small dataset of diverse community data can make such a difference.

I was especially interested in your data curation process. From the paper, I saw that you used a VLM for annotations and mapped views by hand.

  • How scalable did you find this hybrid approach?
  • Were there any recurring pain points or bottlenecks during curation?

Also, generally speaking, would you say curation is a major bottleneck/time sink when developing these models? I've been looking at the ARES project and was thinking of forking it, writing a better front-end/back-end stack, and deploying it as a Space so we can improve all HF datasets on the Hub.

Thanks again for your awesome work.

I had a question regarding the asynchronous inference process. I'm relatively new to this area, so apologies in advance if this is a naive doubt.
From what I understand, the method allows the next inference cycle to begin while the action chunk from the previous inference is still being executed. Wouldn't this introduce a mismatch in some cases where the system's state has evolved significantly during the execution of the previous chunk, making the observation used for the next inference outdated or stale? In such situations, wouldn't the resulting actions be suboptimal or even incorrect?
Please correct me if I've misunderstood something.
Thanks!


Hey @aadarshram 👋 Thank you very much for your question! Indeed, your observation is spot on: if the environment evolves significantly while the next chunk is being predicted, the planned actions might be arbitrarily suboptimal (or even incorrect). However, models that output "action chunks" (which are executed open-loop) natively face this problem, and, to your point, our asynchronous inference stack might be more prone to it.

It's worth noting that we did not find such "high confusion" failure modes in practice, and that aggregating different chunks (rather than simply overriding, i.e. f(A_1, A_2) = A_2) provides a good mechanism to overcome this problem.
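
To illustrate the aggregation point with a toy example: below is a hedged sketch that blends the overlapping timesteps of two action chunks instead of simply overriding the old one (override would be f(A_1, A_2) = A_2). The linear ramp is just one possible aggregation rule, not necessarily the one used in the released stack.

```python
# Toy sketch of aggregating two overlapping action chunks a1 (old) and a2 (new)
# instead of overriding (override would be f(A1, A2) = A2). The linear ramp is an
# illustrative choice, not necessarily the rule used in the released stack.
import numpy as np

def aggregate_chunks(a1: np.ndarray, a2: np.ndarray) -> np.ndarray:
    """a1, a2: (horizon, action_dim) arrays covering the same upcoming timesteps."""
    overlap = min(len(a1), len(a2))
    w = np.linspace(0.0, 1.0, overlap)[:, None]      # 0 -> keep old, 1 -> trust new
    blended = (1.0 - w) * a1[:overlap] + w * a2[:overlap]
    return np.concatenate([blended, a2[overlap:]])   # append any remaining new tail

# Example: two 4-step chunks for a 2-DoF action space.
old_chunk = np.zeros((4, 2))
new_chunk = np.ones((4, 2))
print(aggregate_chunks(old_chunk, new_chunk))
```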


Models citing this paper: 145

Datasets citing this paper: 5

Spaces citing this paper: 0

Collections including this paper: 21