We're currently evaluating state-of-the-art models on the dataset and are gradually rolling out access to a more comprehensive Gym-compatible evaluation environment. This environment will support both offline and online evaluation of agents, offering structural and fundamental improvements over existing benchmarks such as MultiModal-Mind2Web. We will share our findings and release the full leaderboard soon in a blog post on <https://engineering.rabbit.tech/>.
### Preliminary Evaluation Results

* Operation token F1 is calculated with respect to the `cl100k_base` tokenizer. We lower-case the text before scoring, regardless of what the VLM outputs.
* Raw VLM outputs are parsed in a fashion similar to SeeAct, which we will explain in more detail in the blog post.
* For all metrics, higher is better.
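The operation token F1 above is a standard bag-of-tokens F1 (precision/recall over the multiset of tokens), computed after lower-casing both sides. The sketch below illustrates the idea with whitespace tokens for self-containedness; the actual metric tokenizes with `cl100k_base` (e.g. via `tiktoken.get_encoding("cl100k_base").encode`), and the `token_f1` helper name is ours, not part of the dataset tooling.

```python
from collections import Counter

def token_f1(pred: str, gold: str, tokenize) -> float:
    """Bag-of-tokens F1 between a predicted and a gold operation string."""
    # Lower-case both sides before tokenizing, mirroring the preprocessing above.
    pred_toks = tokenize(pred.lower())
    gold_toks = tokenize(gold.lower())
    if not pred_toks or not gold_toks:
        # Both empty -> perfect match; exactly one empty -> no overlap.
        return float(pred_toks == gold_toks)
    # Multiset intersection counts shared tokens with multiplicity.
    overlap = sum((Counter(pred_toks) & Counter(gold_toks)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

# Whitespace tokens for illustration only; swap in the cl100k_base encoder
# for the metric as reported in the table below.
print(token_f1("CLICK [button] Sign In", "click [button] sign in", str.split))  # → 1.0
```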

| Model                      | Step Success Rate | Task Success Rate | Operation Token F1 | Element Accuracy |
|:---------------------------|:------------------|:------------------|:-------------------|:-----------------|
| claude-3-5-sonnet-20240620 | **0.3847**        | **0.0352**        | **0.8104**         | **0.5005**       |
| gemini-1.5-flash-001       | 0.3203            | 0.0300            | 0.7764             | 0.3861           |
| claude-3-opus-20240229     | 0.3048            | 0.0141            | 0.8048             | 0.3720           |
| claude-3-sonnet-20240229   | 0.2770            | 0.0282            | 0.7241             | 0.3528           |
| gpt-4o                     | 0.2702            | 0.0211            | 0.6239             | 0.3602           |
| gemini-1.5-pro-001         | 0.2191            | 0.0000            | 0.7151             | 0.3453           |
| claude-3-haiku-20240307    | 0.2068            | 0.0000            | 0.7835             | 0.2577           |

### Dataset Structure