Update README.md
README.md CHANGED
@@ -41,7 +41,7 @@ print(text)
 
 ### Evaluate the model
 
-Install [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness.git) from source
+Install [lm-eval-harness 0.4.2](https://github.com/EleutherAI/lm-evaluation-harness.git) from source.
 
 Since we encountered an issue evaluating this model with lm-eval, we opted to evaluate the qdq model instead. In our assessment, we found that its accuracy closely matches that of the real quantized model in most cases except for some small models like opt-125m.
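The "install from source" step referenced in the diff can be sketched as below. This is a minimal sketch, not the author's exact commands: the tag name `v0.4.2` is an assumption inferred from the version the updated README names, and an editable `pip` install is one common way to install the harness from a checkout.

```shell
# Clone the evaluation harness and install it from source.
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
cd lm-evaluation-harness

# Assumption: the 0.4.2 release is tagged v0.4.2 in the repository.
git checkout v0.4.2

# Editable install so local changes to the harness take effect immediately.
pip install -e .
```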