Update README.md
README.md
CHANGED
@@ -15,16 +15,6 @@ Inference of this model is compatible with AutoGPTQ's Kernel.



-### Evaluate the model
-
-Install [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness.git) from source; we used git commit 96d185fa6232a5ab685ba7c43e45d1dbb3bb906d
-
-```bash
-lm_eval --model hf --model_args pretrained="Intel/gpt-neox-20b-int4-inc",autogptq=True,gptq_use_triton=True --device cuda:0 --tasks lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,rte,arc_easy,arc_challenge,mmlu --batch_size 32
-```
-
-
-
### Reproduce the model

Here is the sample command to reproduce the model
@@ -49,6 +39,18 @@ python3 main.py \



+
+### Evaluate the model
+
+Install [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness.git) from source; we used git commit 96d185fa6232a5ab685ba7c43e45d1dbb3bb906d
+
+```bash
+lm_eval --model hf --model_args pretrained="Intel/gpt-neox-20b-int4-inc",autogptq=True,gptq_use_triton=True --device cuda:0 --tasks lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,rte,arc_easy,arc_challenge,mmlu --batch_size 32
+```
+
+
+
+
## Caveats and Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
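The evaluation section only states that lm-eval-harness was installed from source at commit 96d185fa6232a5ab685ba7c43e45d1dbb3bb906d. As a reference, here is a minimal sketch of one way to set that up before running the `lm_eval` command above; the clone, checkout, and pip steps are assumptions based on a standard from-source workflow, and only the repository URL and commit id come from the README.

```bash
# Sketch (assumed workflow): install lm-eval-harness from source at the
# commit referenced in the model card.
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
cd lm-evaluation-harness
git checkout 96d185fa6232a5ab685ba7c43e45d1dbb3bb906d
pip install -e .

# The lm_eval command passes autogptq=True, so the auto-gptq package must be
# available in the same environment (an assumption about the setup, not
# something the README states explicitly).
pip install auto-gptq
```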