techdrizzdev committed on
Commit 5bdd858 · verified · 1 Parent(s): 52b5e0d

Update README.md

Files changed (1):
  1. README.md +35 -1
README.md CHANGED
@@ -48,4 +48,38 @@ Each entry in `metadata.jsonl` follows this schema:
  "bbox": [42, 733, 1038, 901],
  "app_name": "com.duolingo",
  "function": "tap_call_llm"
- }
+ }
+
+ ---
+
+ ## 📈 Benchmark Results
+
+ We evaluated **UI-TapBench** across leading Large Multimodal Models (LMMs) to measure tap accuracy and spatial precision for mobile UI interactions.
+
+ ### 🏆 Drizz Benchmark Result
+
+ | Model | Accuracy | Precision | Recall | F1 Score |
+ |---|---:|---:|---:|---:|
+ | Drizz | 94.51 | 96.22 | 98.16 | 97.18 |
+
+ This is the benchmark result achieved by the **Drizz evaluation framework** on UI-TapBench.
+
+ ---
+
+ ### 🔍 Competitor Comparison
+
+ | Model | Accuracy | Precision | Recall | F1 Score |
+ |---|---:|---:|---:|---:|
+ | gpt-5.1 | 21.72 | 23.35 | 75.61 | 35.68 |
+ | gpt-5.2 | 44.83 | 45.71 | 95.88 | 61.91 |
+ | gemini-pro | 89.84 | 91.28 | 98.28 | 94.65 |
+ | gemini-flash | 81.44 | 83.78 | 96.67 | 89.77 |
+ | qwen3.5-27b | 92.98 | 94.98 | 97.61 | 96.28 |
+
+ ---
+
+ ### 💡 Key Takeaway
+
+ The results show that while several models perform well on general UI grounding tasks, **Drizz** demonstrates the highest benchmark performance on **UI-TapBench**, achieving strong spatial precision and reliable tap execution even in dense mobile UI layouts.
+
+ This highlights the importance of evaluating not just reasoning quality but also exact coordinate prediction accuracy for real-world mobile automation systems.
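
For readers who want to sanity-check tap grounding against entries like the schema example above, a minimal sketch might look like the following. The `[x1, y1, x2, y2]` interpretation of `bbox` and the helper names `load_metadata` / `tap_hits_bbox` are illustrative assumptions, not part of the dataset or the Drizz framework.

```python
import json

def load_metadata(path):
    """Read UI-TapBench entries from a JSON-Lines file, one dict per line."""
    entries = []
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                entries.append(json.loads(line))
    return entries

def tap_hits_bbox(x, y, bbox):
    """Return True if a predicted tap (x, y) falls inside the target bbox.

    Assumes bbox is [x1, y1, x2, y2] in pixels; the README does not state
    the field order, so treat this as an illustrative guess.
    """
    x1, y1, x2, y2 = bbox
    return x1 <= x <= x2 and y1 <= y <= y2

# The entry from the schema example above.
entry = {"bbox": [42, 733, 1038, 901], "app_name": "com.duolingo", "function": "tap_call_llm"}
print(tap_hits_bbox(540, 800, entry["bbox"]))  # True: tap lands inside the target element
print(tap_hits_bbox(10, 10, entry["bbox"]))    # False: tap misses the target
```

A hit in this sense corresponds to the "exact coordinate prediction" behavior the key takeaway emphasizes: the tap must land inside the ground-truth element, not merely near it.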
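The benchmark tables report accuracy, precision, recall, and F1. As a reminder of how these standard metrics relate, the sketch below computes them from binary labels; how UI-TapBench maps individual tap outcomes onto those labels is not stated in this README, so that mapping is an assumption.

```python
def classification_metrics(y_true, y_pred):
    """Standard accuracy / precision / recall / F1 over binary labels (1 = positive).

    The mapping from per-example tap outcomes to these labels is assumed here,
    not taken from the UI-TapBench README.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

    accuracy = (tp + tn) / len(y_true) if y_true else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Toy example: 4 of 5 expected taps predicted, and every predicted tap is correct.
print(classification_metrics([1, 1, 1, 1, 1], [1, 1, 1, 1, 0]))
```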