Update README.md
README.md
@@ -47,20 +47,19 @@ Since we encountered an issue evaluating this model with lm-eval, we opted to ev
-| Metric |
+| Metric | BF16 | INT4 qdq |
 | -------------- | ------ | ------ |
-| Avg. | 0.
+| Avg. | 0.6138 | 0.6115 |
-| mmlu | 0.
+| mmlu | 0.5325 | 0.5417 |
-| lambada_openai | 0.
+| lambada_openai | 0.6276 | 0.6225 |
-| hellaswag | 0.
+| hellaswag | 0.5584 | 0.5498 |
-| winogrande | 0.
+| winogrande | 0.7561 | 0.7545 |
 | piqa | 0.7867 | 0.7824 |
-| truthfulqa_mc1 | 0.
+| truthfulqa_mc1 | 0.3146 | 0.3060 |
-| openbookqa | 0.
+| openbookqa | 0.4020 | 0.4100 |
-| boolq | 0.
+| boolq | 0.8330 | 0.8327 |
-|
-|
+| arc_easy | 0.7992 | 0.7955 |
-| arc_challenge | 0.5290 | 0.5196 |
+| arc_challenge | 0.5282 | 0.5196 |

 ### Reproduce the model