techdrizzdev committed on
Commit cf7d81c · verified · 1 Parent(s): 5bdd858

Update README.md

Files changed (1)
  1. README.md +7 -20
README.md CHANGED
@@ -56,30 +56,17 @@ Each entry in `metadata.jsonl` follows this schema:
 
 We evaluated **UI-TapBench** across leading Large Multimodal Models (LMMs) to measure tap accuracy and spatial precision for mobile UI interactions.
 
-### 🏆 Drizz Benchmark Result
-
-| Model | Accuracy | Precision | Recall | F1 Score |
-|---|---:|---:|---:|---:|---:|---:|
-|Drizz | 94.51 | 96.22 | 98.16 | 97.18 |
-
-This represents the benchmark achieved by the **Drizz evaluation framework** on UI-TapBench.
-
----
-
 ### 🔍 Competitor Comparison
 
-| Model | Accuracy | Precision | Recall | F1 Score |
-|---|---:|---:|---:|---:|---:|---:|
-| gpt-5.1 | 21.72 | 23.35 | 75.61 | 35.68 |
-| gpt-5.2 | 44.83 | 45.71 | 95.88 | 61.91 |
+| Model | Accuracy | Precision | Recall | F1 Score |
+|---|---:|---:|---:|---:|
+| **Drizz (ours)** | **94.51** | **96.22** | **98.16** | **97.18** |
+| gpt-5.1 | 21.72 | 23.35 | 75.61 | 35.68 |
+| gpt-5.2 | 44.83 | 45.71 | 95.88 | 61.91 |
 | gemini-pro | 89.84 | 91.28 | 98.28 | 94.65 |
-| gemini-flash | 81.44 | 83.78 | 96.67 | 89.77 |
-| qwen3.5-27b | 92.98 | 94.98 | 97.61 | 96.28 |
-
----
+| gemini-flash | 81.44 | 83.78 | 96.67 | 89.77 |
+| qwen3.5-27b | 92.98 | 94.98 | 97.61 | 96.28 |
 
 ### 💡 Key Takeaway
 
 The results show that while several models perform well on general UI grounding tasks, **Drizz** demonstrates the highest benchmark performance on **UI-TapBench**, achieving strong spatial precision and reliable tap execution even in dense mobile UI layouts.
-
-This highlights the importance of evaluating not just reasoning quality, but exact coordinate prediction accuracy for real-world mobile automation systems.
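
The F1 scores in the updated comparison table are the harmonic mean of precision and recall. As a sanity check (a sketch, not part of the benchmark code), the reported F1 values can be recomputed from the table's precision/recall columns and agree to two decimal places:

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (values given in percent)."""
    return 2 * precision * recall / (precision + recall)

# (precision, recall, reported F1) taken from the UI-TapBench comparison table.
reported = {
    "Drizz (ours)": (96.22, 98.16, 97.18),
    "gpt-5.1":      (23.35, 75.61, 35.68),
    "gpt-5.2":      (45.71, 95.88, 61.91),
    "gemini-pro":   (91.28, 98.28, 94.65),
    "gemini-flash": (83.78, 96.67, 89.77),
    "qwen3.5-27b":  (94.98, 97.61, 96.28),
}

for model, (p, r, f1_reported) in reported.items():
    # Each reported F1 should match the recomputed value up to rounding.
    assert abs(f1(p, r) - f1_reported) < 0.01, model
```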
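
The takeaway above rests on exact coordinate prediction. This README excerpt does not specify UI-TapBench's scoring rule, but a common convention for tap benchmarks (an assumption here, not the documented metric) is to count a tap as correct when the predicted point falls inside the target element's bounding box; a minimal sketch:

```python
def tap_hit(pred_x: float, pred_y: float,
            bbox: tuple[float, float, float, float]) -> bool:
    """Return True if the predicted tap lands inside the target's box.

    NOTE: illustrative only -- the bounding-box hit rule is an assumption;
    UI-TapBench's actual scoring is not described in this excerpt.
    bbox is (x1, y1, x2, y2) in screen pixels.
    """
    x1, y1, x2, y2 = bbox
    return x1 <= pred_x <= x2 and y1 <= pred_y <= y2

# Hypothetical example: a button occupying (120, 840)-(360, 920) on screen.
assert tap_hit(240, 880, (120, 840, 360, 920))      # tap on the button: hit
assert not tap_hit(240, 700, (120, 840, 360, 920))  # tap above the button: miss
```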