Reacted to singhsidhukuldeep's post with 🔥 about 1 month ago
Good folks from @Microsoft have released an exciting breakthrough in GUI automation!
OmniParser: a game-changing approach for pure vision-based GUI agents that works across multiple platforms and applications.
Key technical innovations:
- Custom-trained interactable icon detection model using 67k screenshots from popular websites
- Specialized BLIP-v2 model fine-tuned on 7k icon-description pairs for extracting functional semantics
- Novel combination of icon detection, OCR, and semantic understanding to create structured UI representations
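To make that last point concrete, here is a minimal sketch of how the three stages could be wired together. Everything named below (the `icon_detector.pt` weights, the base BLIP-2 checkpoint standing in for the fine-tuned captioner, and the `UIElement` schema) is my own illustrative assumption, not the released OmniParser code:

```python
# Rough sketch of an OmniParser-style pipeline: detect interactable icons,
# caption each icon for functional semantics, OCR the visible text, and
# merge everything into one structured UI representation.
from dataclasses import dataclass

import easyocr                                   # OCR for on-screen text
from PIL import Image
from transformers import Blip2ForConditionalGeneration, Blip2Processor
from ultralytics import YOLO                     # stand-in icon detector


@dataclass
class UIElement:
    kind: str         # "icon" or "text"
    bbox: tuple       # (x1, y1, x2, y2) in pixel coordinates
    description: str  # generated functional caption or OCR text


def parse_screenshot(path: str) -> list[UIElement]:
    image = Image.open(path).convert("RGB")
    elements: list[UIElement] = []

    # Load models once. OmniParser trains its own detector on 67k
    # screenshots and fine-tunes BLIP-v2 on 7k icon-description pairs;
    # the generic checkpoints here are placeholders (my assumption).
    detector = YOLO("icon_detector.pt")          # hypothetical weights file
    processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
    captioner = Blip2ForConditionalGeneration.from_pretrained(
        "Salesforce/blip2-opt-2.7b"
    )

    # 1) Interactable-region detection over the full screenshot.
    for box in detector(path)[0].boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        crop = image.crop((x1, y1, x2, y2))

        # 2) Functional semantics: caption each detected icon crop.
        inputs = processor(images=crop, return_tensors="pt")
        ids = captioner.generate(**inputs)
        caption = processor.decode(ids[0], skip_special_tokens=True)
        elements.append(UIElement("icon", (x1, y1, x2, y2), caption))

    # 3) OCR pass for visible text, merged into the same representation.
    for bbox, text, conf in easyocr.Reader(["en"]).readtext(path):
        xs = [p[0] for p in bbox]
        ys = [p[1] for p in bbox]
        elements.append(
            UIElement("text", (min(xs), min(ys), max(xs), max(ys)), text)
        )

    # The structured element list can now be serialized (e.g. as JSON) and
    # handed to an LLM agent in place of HTML or a view hierarchy.
    return elements
```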
The results are impressive:
- Outperforms GPT-4V baseline by significant margins on the ScreenSpot benchmark
- Achieves 73% accuracy on Mind2Web without requiring HTML data
- Demonstrates a 57.7% success rate on AITW mobile tasks
What makes OmniParser special is its ability to work across platforms (mobile, desktop, web) using only screenshot data, with no HTML or view hierarchy needed. This opens up exciting possibilities for building truly universal GUI automation tools.
The team has open-sourced both the interactable region detection dataset and icon description dataset to accelerate research in this space.
Kudos to the Microsoft Research team for pushing the boundaries of what's possible with pure vision-based GUI understanding!
What are your thoughts on vision-based GUI automation?