Commit 05633a4 (verified) by HongxinLi · Parent: 890ad1b · Create README.md

---
license: mit
base_model:
- microsoft/Florence-2-large
library_name: transformers
tags:
- GUI
- VLM
- Agent
- GUI-Grounding
---

# 🎯 GoClick-Large: Super Fast Lightweight GUI Grounding Expert

<div align="center">

[![GitHub](https://img.shields.io/badge/GitHub-Repo-black?logo=github)](https://github.com/ZJULiHongxin/GoClick)
[![Paper](https://img.shields.io/badge/Paper-GoClick-blue?logo=adobeacrobatreader)](https://arxiv.org/abs/2604.23941)
[![GoClickLarge](https://img.shields.io/badge/πŸ€—%20GoClickLarge-Model-yellow)](https://huggingface.co/HongxinLi/GoClick-Large)
[![GoClickBase](https://img.shields.io/badge/πŸ€—%20GoClickBase-Model-yellow)](https://huggingface.co/HongxinLi/GoClick-Base)
[![SFTData](https://img.shields.io/badge/πŸ€—%20SFT-Dataset-yellow)](https://huggingface.co/datasets/HongxinLi/GoClick_Coreset_3814k)
[![SFTZipData](https://img.shields.io/badge/πŸ€—%20SFTZip-SFTData-yellow)](https://huggingface.co/datasets/HongxinLi/GoClick_sft_data)

</div>

+ GoClick is a state-of-the-art two-stage framework for precise UI element grounding. Built on the Florence-2 architecture, it bridges the gap between high-level intent and low-level pixel coordinates by separating the Planning and Grounding tasks.
30
+
31
+ ## πŸ—οΈ Agent Architecture Overview
32
+
33
+ 1. Stage 1 (Planning): Analyze UI screenshot + Goal -> Output Function Description.
34
+ 2. Stage 2 (Grounding): Screenshot + Function Description -> Output Precise Coordinates.Note: This model is the specialized Stage 2 Grounder, fine-tuned for extreme precision in locating elements based on their described functionality.
35
+
36
+ ## πŸš€ Quick Start (Inference of The Model)
37
+
38
+ Prerequisites
39
+
40
+ ```
41
+ pip install transformers==4.45.0 timm
42
+ ```
43
+
44
+ Note: The version of Transformers should not be too high. Adjust the version if model loading fails.
45
+
46
+ ### Usage Example
47
+
48
+ ```
49
+ from transformers import AutoModelForCausalLM, AutoProcessor
50
+ from PIL import Image
51
+
52
+
53
+ def postprocess(text: str, image_size: tuple[int]):
54
+ """Function that decodes model's generation into action json.
55
+
56
+ Args:
57
+ text: single generated sample
58
+ image_size: corresponding image size
59
+ """
60
+ point_pattern = r"<loc_(\d+)>,<loc_(\d+)>"
61
+
62
+ try:
63
+ location = re.findall(point_pattern, text)[0]
64
+ if len(location) > 0:
65
+ point = [int(loc) for loc in location]
66
+
67
+ except Exception:
68
+ point = (0, 0)
69
+
70
+ return point
71
+
72
+ # Load model and processor
73
+ model = AutoModelForCausalLM.from_pretrained("HongxinLi/GoClick-Base", trust_remote_code=True)
74
+ processor = AutoProcessor.from_pretrained("HongxinLi/GoClick-Base", trust_remote_code=True)
75
+
76
+ # Load UI screenshot
77
+ image = Image.open("ui_screenshot.png")
78
+
79
+ # Stage 1: Planning
80
+
81
+ # Functionality Grounding (For AutoGUI FuncPred Benchmark)
82
+ planning_prompt = f"Locate the element according to its detailed functionality description. {goal_info} (Output the center coordinates of the target)"
83
+
84
+ # Intent Grounding (For RefExp, MOTIF, and VisualWebBench Action Grounding)
85
+ planning_prompt = f"I want to {goal_info}. Please locate the target element I should interact with. (Output the center coordinates of the target)"
86
+
87
+ # Description Grounding (For ScreenSpot/v2 and VisualWebBench Element Grounding))
88
+ planning_prompt = f"Where is the {goal_info} element? (Output the center coordinates of the target)"
89
+
90
+
91
+ inputs = processor(
92
+ images=image,
93
+ text=prompt,
94
+ return_tensors="pt",
95
+ do_resize=True,
96
+ ).to(model.device, dtype=model.dtype)
97
+
98
+ outputs = model.generate(
99
+ **inputs,
100
+ do_sample= False,
101
+ max_new_tokens=max_new_tokens,
102
+ use_cache=True
103
+ )
104
+
105
+ text_output = processor.tokenizer.batch_decode(outputs, skip_special_tokens=False)[0]
106
+ text_output = postprocess(text_output, img_size)
107
+
108
+ ```
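`postprocess` returns the raw `<loc_*>` bin indices, not pixel positions. If GoClick follows the Florence-2 convention of quantizing each coordinate axis into 1,000 equal bins (an assumption — verify against the GoClick repo; `bins_to_pixels` is a hypothetical helper), converting a decoded point to pixel coordinates looks like:

```python
def bins_to_pixels(point, image_size, num_bins=1000):
    """Map <loc_*> bin indices to pixel coordinates.

    Assumes Florence-2-style quantization: each axis is split into
    num_bins equal bins, and a bin index maps to the bin's center.
    """
    x_bin, y_bin = point
    width, height = image_size
    x = (x_bin + 0.5) / num_bins * width
    y = (y_bin + 0.5) / num_bins * height
    return (round(x), round(y))


# e.g. bin (500, 500) on a 1920x1080 screenshot lands near the center
print(bins_to_pixels((500, 500), (1920, 1080)))
```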

### πŸ“Š Benchmarks

GoClick achieves a strong tradeoff between GUI element grounding accuracy and inference latency (TTFT: time to first token; TPOT: time per output token):

| Model | Size | TTFT ↓ (ms) | TPOT ↓ (ms/token) | FuncPred (F; M, W) | ScreenSpot (B; M, W, D) | ScreenSpot-v2 (B; M, W, D) | MOTIF (I; M) | RefExp (I; M) | VWB EG (T; W) | VWB AG (I; W) |
|-------|------|-------------|-------------------|--------------------|-------------------------|---------------------------|--------------|---------------|---------------|---------------|
| GPT-4o | - | - | - | 9.8 | 17.8 | 20.4 | 30.5 | 21.8 | 5.6 | 6.8 |
| Qwen2VL-7B | 8B | 118.9 | 21.2 | 38.7 | 66.4 | 66.9 | 75.1 | 64.8 | 55.9 | 62.1 |
| CogAgent | 18B | 1253.2 | 208.8 | 29.3 | 47.4 | 49.2 | 46.7 | 35.0 | 55.7 | 59.2 |
| SeeClick | 10B | 160.4 | 184.4 | 19.8 | 53.4 | 54.0 | 11.1 | 58.1 | 39.2 | 27.2 |
| Ferret-UI | 8B | 152.5 | 22.9 | 1.2 | 7.1 | 7.8 | 15.9 | 5.5 | 3.9 | 1.9 |
| UGround | 7B | 1034.6 | 27.9 | 48.8 | 74.8 | 76.5 | 72.4 | 73.6 | 85.2 | 63.1 |
| OS-ATLAS-8B | 8B | 137.5 | 19.9 | 52.1 | 82.5 | 84.1 | 78.8 | 66.5 | 82.6 | 69.9 |
| Aguvis | 8B | 119.7 | 21.2 | 52.0 | 83.8 | 85.6 | 73.8 | 80.9 | 91.3 | 68.0 |
| Qwen2-VL | 2B | 58.8 | 16.4 | 7.1 | 17.9 | 18.6 | 28.8 | 29.2 | 17.9 | 17.5 |
| OS-ATLAS-4B | 4B | 137.3 | 31.4 | 44.6 | 66.8 | 68.7 | 75.4 | 77.1 | 47.7 | 58.3 |
| Ferret-UI | 3B | 69.5 | 9.8 | 1.3 | 2.1 | 1.9 | 5.5 | 1.1 | 0.7 | 1.0 |
| ShowUI | 2B | 79.7 | 14.7 | 39.9 | 76.1 | 77.4 | 72.3 | 58.4 | 64.2 | 55.3 |
| **GoClick-L (ours)** | 0.8B | 91.1 | 8.3 | **69.5** | **78.5** | **81.1** | **80.4** | **78.2** | **90.3** | **68.0** |
| **GoClick-B (ours)** | 0.2B | **37.7** | **4.1** | 64.4 | 74.1 | 75.2 | 76.8 | 71.9 | 90.3 | 61.2 |

## πŸ“ Citation

If you use GoClick in your research, please cite our paper:

```bibtex
@misc{li2026goclicklightweightelementgrounding,
      title={GoClick: Lightweight Element Grounding Model for Autonomous GUI Interaction},
      author={Hongxin Li and Yuntao Chen and Zhaoxiang Zhang},
      year={2026},
      eprint={2604.23941},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2604.23941},
}
```