UGround
UGround: Universal GUI Visual Grounding for GUI Agents
osunlp/UGround-V1-2B
Image-Text-to-Text • Note: Based on Qwen2-VL-2B-Instruct (a minimal loading sketch follows this model list).
osunlp/UGround-V1-7B
Image-Text-to-Text • Note: Based on Qwen2-VL-7B-Instruct.
osunlp/UGround-V1-72B
Image-Text-to-Text • Note: Based on Qwen2-VL-72B-Instruct. Full training without LoRA.
osunlp/UGround-V1-72B-Preview
Image-Text-to-Text • Note: Based on Qwen2-VL-72B-Instruct. Trained with LoRA.
osunlp/UGround
Image-Text-to-Text • Note: The initial model, based on the modified LLaVA architecture (CLIP + Vicuna-7B) described in the paper.
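Since the V1 checkpoints above are based on Qwen2-VL-*-Instruct, they should load with the standard Qwen2-VL classes in transformers. Below is a minimal, untested sketch: the screenshot path, the grounding prompt wording, and the expected coordinate-style answer are assumptions for illustration, not the official usage; consult the model cards for the exact prompt and output format.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

# Load a Qwen2-VL-based UGround checkpoint (7B shown; the 2B/72B variants load the same way).
model_id = "osunlp/UGround-V1-7B"
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Ask the model to ground a natural-language element description in a screenshot.
# NOTE: the phrasing below is an assumed prompt, not necessarily the official UGround one.
image = Image.open("screenshot.png")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": 'Locate the "Search" button and answer with its (x, y) coordinates.'},
        ],
    }
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=64)
# Decode only the newly generated tokens, i.e. the model's predicted location.
answer = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```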
Navigating the Digital World as Humans Do: Universal Visual Grounding for GUI Agents
Paper • 2410.05243 • Note: A low-cost, scalable, and effective data synthesis pipeline for GUI visual grounding; UGround, a SOTA GUI visual grounding model; SeeAct-V, a purely vision-only (modular) GUI agent framework; and the first demonstration of SOTA performance by vision-only GUI agents.
📱💻🌐 UGround
Space (paused) • Note: Will open a new Space for the Qwen2-VL-based UGround.
📱💻🌐 UGround-V1-2B
Space (paused) • Note: Trying to figure out how to accelerate inference; will open a new Space for UGround-V1.1.