n1ck-guo committed (verified)
Commit 702dc57 · 1 Parent(s): 3efe8d3

Update README.md

Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -5,7 +5,7 @@ datasets:
 ---
 ## Model Details
 
-This model is an int4 model with group_size128 and sym quantization of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) generated by [intel/auto-round](https://github.com/intel/auto-round). We found there is a large accuracy drop of asym kernel for this model.
+This model is an int4 model with group_size128 and sym quantization of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) generated by [intel/auto-round](https://github.com/intel/auto-round). We found there is a large accuracy drop of asym kernel for this model. If you need AutoGPTQ format, please load the model with revision 5973e3a
 
 
 
@@ -70,7 +70,7 @@ She is curious and brave and
 
 ### Evaluate the model
 
-pip install lm-eval==0.4.2
+pip install lm-eval==0.4.4
 
 ```bash
 auto-round --eval --model Intel/phi-2-int4-inc --device cuda:0 --tasks lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,mmlu --batch_size 16
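The new README line says the AutoGPTQ-format weights live on revision 5973e3a. A minimal sketch of how that pin would be passed to the standard transformers `from_pretrained` API — the actual call needs a model download, so this only assembles the keyword arguments (the `device_map` value is an assumption, not from the commit):

```python
# Sketch: keyword arguments for pinning the AutoGPTQ-format revision
# named in the updated README line. "5973e3a" is taken from the diff;
# device_map="auto" is an assumed, common choice.
LOAD_KWARGS = {
    "pretrained_model_name_or_path": "Intel/phi-2-int4-inc",
    "revision": "5973e3a",  # AutoGPTQ-format revision per the README note
    "device_map": "auto",
}

# Usage (requires transformers plus a GPTQ backend installed):
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(**LOAD_KWARGS)
print(LOAD_KWARGS["revision"])
```

Pinning `revision` keeps the load reproducible even after the default branch of the repo changes format, which is exactly the situation this commit documents.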