Muning9999 committed
Commit f822d05 · 1 Parent(s): 43f4c5d
Update README.md

README.md CHANGED
@@ -22,13 +22,15 @@ First, we evaluate the Hammer series on the Berkeley Function-Calling Leaderboard (BFCL):
<img src="figures/bfcl.PNG" alt="overview" width="1480" style="margin: auto;">
</div>

The above table indicates that, within the BFCL framework, our Hammer series consistently achieves state-of-the-art performance at comparable model scales; in particular, Hammer-7B's overall performance ranks second only to the proprietary GPT-4.

In addition, we evaluated our Hammer series (1.5B, 4B, 7B) on other academic benchmarks to further demonstrate our model's generalization ability:

<div style="text-align: center;">
<img src="figures/others.PNG" alt="overview" width="1000" style="margin: auto;">
</div>

Observing Hammer's performance across various benchmarks unrelated to the APIGen Function-Calling Datasets, we find that Hammer delivers remarkably stable performance, which indicates the robustness of the Hammer models. In contrast, the baseline methods exhibit varying degrees of effectiveness across these other benchmarks.

## Requirements
The code for Hammer-4b has been included in the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`.
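As a quick sanity check of this requirement, below is a minimal sketch of loading a Hammer checkpoint with the standard `transformers` auto classes. The repository id `MadeAgents/Hammer-4b`, the availability of a chat template in the tokenizer, and the example prompt are assumptions for illustration only; refer to the model card and the usage instructions for the actual ids and the function-calling prompt format.

```python
# Minimal sketch (not the official usage): check the transformers version and
# run a plain chat-style generation with a Hammer checkpoint.
import transformers
from packaging import version
from transformers import AutoModelForCausalLM, AutoTokenizer

assert version.parse(transformers.__version__) >= version.parse("4.37.0"), \
    "Please install transformers>=4.37.0"

model_id = "MadeAgents/Hammer-4b"  # assumed repository id; replace with the id on the model card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",  # requires `accelerate`; remove to load on CPU
)

# Simple smoke test, assuming the tokenizer ships a chat template.
# The function-calling prompt format is documented separately.
messages = [{"role": "user", "content": "What is the weather like in Paris today?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```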