lvkaokao committed
Commit · 744aab7 · 1 Parent(s): 4121a2d
update.

src/display/about.py  CHANGED  (+0 -10)
@@ -49,16 +49,6 @@ python main.py --model=hf-causal-experimental
     --output_path=<output_path>
 ```
 
-```
-python main.py --model=hf-causal-experimental
-    --model_args="pretrained=<your_model>,use_accelerate=True,revision=<your_model_revision>"
-    --tasks=<task_list>
-    --num_fewshot=<n_few_shot>
-    --batch_size=1
-    --output_path=<output_path>
-
-```
-
 **Note:**
 - We run `llama.cpp` series models on Xeon CPU and others on NVidia GPU.
 - If model parameters > 7B, we use `--batch_size 4`. If model parameters < 7B, we use `--batch_size 2`. And we set `--batch_size 1` for llama.cpp. You can expect results to vary slightly for different batch sizes because of padding.
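For reference, a minimal sketch of how the remaining command combines with the batch-size note above, assuming a model larger than 7B running on GPU (hence `--batch_size 4`); the model name, revision, task list, few-shot count, and output path are illustrative placeholders, not values from this commit:

```
# Hypothetical invocation for a >7B GPU model; per the note, --batch_size 4 is used.
python main.py --model=hf-causal-experimental \
    --model_args="pretrained=<your_model>,use_accelerate=True,revision=<your_model_revision>" \
    --tasks=<task_list> \
    --num_fewshot=<n_few_shot> \
    --batch_size=4 \
    --output_path=<output_path>
```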