Mobile-Agent: Autonomous Multi-Modal Mobile Device Agent with Visual Perception
Abstract
Mobile device agents based on Multimodal Large Language Models (MLLM) are becoming a popular application. In this paper, we introduce Mobile-Agent, an autonomous multi-modal mobile device agent. Mobile-Agent first leverages visual perception tools to accurately identify and locate both the visual and textual elements within an app's front-end interface. Based on the perceived visual context, it then autonomously plans and decomposes the complex operation task and navigates mobile apps through operations step by step. Unlike previous solutions that rely on app XML files or mobile system metadata, Mobile-Agent works in a vision-centric way, which allows greater adaptability across diverse mobile operating environments and eliminates the need for system-specific customizations. To assess the performance of Mobile-Agent, we introduce Mobile-Eval, a benchmark for evaluating mobile device operations, and conduct a comprehensive evaluation of Mobile-Agent on it. The experimental results indicate that Mobile-Agent achieves remarkable accuracy and completion rates. Even with challenging instructions, such as multi-app operations, Mobile-Agent can still complete the task. Code and model will be open-sourced at https://github.com/X-PLUG/MobileAgent.
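To make the described pipeline concrete, below is a minimal sketch of the perceive-plan-act loop the abstract outlines. All function names here (`capture_screen`, `detect_text`, `detect_icons`, `plan_next_action`, `execute`) are hypothetical placeholders for illustration, not the paper's actual API.

```python
# Illustrative sketch of a vision-centric mobile agent loop (not the paper's code).
# The perception and planning functions are placeholders: in practice they would
# wrap an OCR model, an icon/grounding detector, an MLLM, and a device controller.

def capture_screen() -> str:
    """Return a path to a screenshot of the device's current screen."""
    raise NotImplementedError("plug in a device screenshot tool here")

def detect_text(screenshot: str) -> list[tuple[str, tuple[int, int]]]:
    """OCR placeholder: return recognized strings with their pixel centers."""
    raise NotImplementedError("plug in an OCR model here")

def detect_icons(screenshot: str) -> list[tuple[str, tuple[int, int]]]:
    """Detector placeholder: return icon labels with their pixel centers."""
    raise NotImplementedError("plug in an icon/grounding detector here")

def plan_next_action(instruction: str, texts, icons, history) -> dict:
    """MLLM placeholder: decide the next operation (tap, type, stop, ...)."""
    raise NotImplementedError("plug in an MLLM call here")

def execute(action: dict) -> None:
    """Device-control placeholder: perform the chosen operation on the phone."""
    raise NotImplementedError("plug in a device controller here")

def run_agent(instruction: str, max_steps: int = 20) -> None:
    """Perceive -> plan -> act, step by step, using only screenshots as input."""
    history: list[dict] = []
    for _ in range(max_steps):
        screenshot = capture_screen()
        texts = detect_text(screenshot)    # textual elements and their locations
        icons = detect_icons(screenshot)   # visual elements and their locations
        action = plan_next_action(instruction, texts, icons, history)
        if action["type"] == "stop":       # the planner decides the task is done
            break
        execute(action)                    # e.g. tap at coordinates, type text
        history.append(action)
```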
Community
Does this work on a real phone, or is this a simulator?
AFAIK, you would need platform-level (iOS, Android) permissions to operate a phone like that.
Just thinking it would be cool to have an app that would allow this...
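For context: on Android, this kind of control is typically possible on a real phone over ADB once USB debugging is enabled, without modifying the OS; whether the paper uses exactly this mechanism is not stated in the abstract. A minimal sketch using the standard `adb` CLI from Python (the coordinates and text are arbitrary examples):

```python
# Minimal sketch of driving a real Android phone from a host machine via ADB.
# Requires USB debugging enabled on the phone and `adb` on the host PATH;
# the coordinates and text below are example values only.
import subprocess

def adb(*args: str, **kwargs) -> subprocess.CompletedProcess:
    """Run an adb command and fail loudly if it errors."""
    return subprocess.run(["adb", *args], check=True, **kwargs)

# Capture the current screen to a local PNG file.
with open("screen.png", "wb") as f:
    adb("exec-out", "screencap", "-p", stdout=f)

# Tap at pixel coordinates (540, 1200).
adb("shell", "input", "tap", "540", "1200")

# Type text into the focused field (%s encodes a space for `input text`).
adb("shell", "input", "text", "hello%sworld")

# Press the hardware Back key.
adb("shell", "input", "keyevent", "KEYCODE_BACK")
```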
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- AppAgent: Multimodal Agents as Smartphone Users (2023)
- WebVoyager: Building an End-to-End Web Agent with Large Multimodal Models (2024)
- MobileAgent: enhancing mobile control via human-machine interaction and SOP integration (2024)
- SeeClick: Harnessing GUI Grounding for Advanced Visual GUI Agents (2024)
- VisualWebArena: Evaluating Multimodal Agents on Realistic Visual Web Tasks (2024)