arxiv:2503.01378

CognitiveDrone: A VLA Model and Evaluation Benchmark for Real-Time Cognitive Task Solving and Reasoning in UAVs

Published on Mar 3 · Submitted by vansin on Mar 6
Abstract

This paper introduces CognitiveDrone, a novel Vision-Language-Action (VLA) model tailored for complex Unmanned Aerial Vehicle (UAV) tasks that demand advanced cognitive abilities. Trained on a dataset comprising over 8,000 simulated flight trajectories across three key categories (Human Recognition, Symbol Understanding, and Reasoning), the model generates real-time 4D action commands based on first-person visual inputs and textual instructions. To further enhance performance in intricate scenarios, we propose CognitiveDrone-R1, which integrates an additional Vision-Language Model (VLM) reasoning module to simplify task directives prior to high-frequency control. Experimental evaluations using our open-source benchmark, CognitiveDroneBench, reveal that while a racing-oriented model (RaceVLA) achieves an overall success rate of 31.3%, the base CognitiveDrone model reaches 59.6%, and CognitiveDrone-R1 attains a success rate of 77.2%. These results demonstrate improvements of up to 30% in critical cognitive tasks, underscoring the effectiveness of incorporating advanced reasoning capabilities into UAV control systems. Our contributions include the development of a state-of-the-art VLA model for UAV control and the introduction of the first dedicated benchmark for assessing cognitive tasks in drone operations. The complete repository is available at cognitivedrone.github.io.
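The abstract describes a two-stage design: a slower VLM reasoning module rewrites the task instruction into a simpler directive, and the VLA controller then maps first-person frames plus that directive to 4D action commands at control rate. The Python sketch below shows one plausible shape for that loop; the class names (ReasoningVLM, CognitiveDroneVLA), the method signatures, and the (vx, vy, vz, yaw_rate) action layout are illustrative assumptions, not the paper's actual API — see cognitivedrone.github.io for the real implementation.

```python
from dataclasses import dataclass

@dataclass
class Action4D:
    # Assumed layout of the 4D action command: three linear
    # velocities plus a yaw rate. The paper's exact encoding may differ.
    vx: float
    vy: float
    vz: float
    yaw_rate: float

class ReasoningVLM:
    """Hypothetical stand-in for the CognitiveDrone-R1 reasoning module."""
    def simplify(self, image, instruction: str) -> str:
        # A real module would prompt a VLM to rewrite the task, e.g.
        # "fly through the gate showing 2+3" -> "fly through gate 5".
        return instruction  # placeholder: pass-through

class CognitiveDroneVLA:
    """Hypothetical stand-in for the high-frequency VLA controller."""
    def act(self, image, directive: str) -> Action4D:
        # A real controller would run the VLA policy on frame + directive.
        return Action4D(0.0, 0.0, 0.0, 0.0)  # placeholder: hover

def control_step(vlm, vla, image, instruction, cached_directive=None):
    # The reasoning VLM runs once (or at a low rate) to simplify the task;
    # the VLA then runs every frame on the cached, simplified directive.
    directive = cached_directive or vlm.simplify(image, instruction)
    return vla.act(image, directive), directive

# Minimal usage: the returned directive is cached and reused on later frames,
# so the expensive reasoning step stays off the high-frequency control path.
action, directive = control_step(ReasoningVLM(), CognitiveDroneVLA(),
                                 image=None,
                                 instruction="fly through the gate with the larger number")
```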

