TheoremExplainAgent: Towards Multimodal Explanations for LLM Theorem Understanding • Paper 2502.19400 • Published Feb 26, 2025
Embodied Red Teaming for Auditing Robotic Foundation Models • Paper 2411.18676 • Published Nov 27, 2024
Towards Data-Efficient Pretraining for Atomic Property Prediction • Paper 2502.11085 • Published Feb 16, 2025
Intuitive physics understanding emerges from self-supervised pretraining on natural videos • Paper 2502.11831 • Published Feb 17, 2025
How Do LLMs Acquire New Knowledge? A Knowledge Circuits Perspective on Continual Pre-Training • Paper 2502.11196 • Published Feb 16, 2025
Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention • Paper 2502.11089 • Published Feb 16, 2025
π0 and π0-FAST: Vision-Language-Action Models for General Robot Control • Article • Published Feb 4, 2025