feihu.hf committed
Commit 3a70652
1 Parent(s): dd40fc8
update readme
README.md CHANGED
@@ -78,8 +78,9 @@ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 For quantized models, we advise you to use the GPTQ, AWQ, and GGUF correspondents, namely `Qwen1.5-7B-Chat-GPTQ`, `Qwen1.5-7B-Chat-AWQ`, and `Qwen1.5-7B-Chat-GGUF`.
 
 
-##
-
+## Tips
+
+* If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in `generation_config.json`.
 
 
 ## Citation
@@ -93,4 +94,4 @@ If you find our work helpful, feel free to give us a cite.
 journal={arXiv preprint arXiv:2309.16609},
 year={2023}
 }
-```
+```
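The quantized correspondents named in the context line load through the same `transformers` API as the base model; only the repo ID changes. A minimal sketch, assuming the usual `Qwen/` org prefix and, for AWQ, that the `autoawq` backend is installed (neither is stated in the diff itself):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo ID built from the name in the README; loading the AWQ
# checkpoint needs the autoawq package installed alongside transformers.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-7B-Chat-AWQ", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-7B-Chat-AWQ")
```

The GPTQ correspondent loads the same way with its own quantization backend; the GGUF file is meant for llama.cpp-style runtimes rather than `from_pretrained`.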
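The new Tips bullet leans on the fact that `transformers` reads a repo's `generation_config.json` automatically and applies its sampling hyper-parameters in `generate()` unless they are overridden. A sketch of that flow, following the quickstart pattern the first hunk header points at (the prompt and token budget are illustrative, not from the diff):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-7B-Chat", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-7B-Chat")

messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

# generate() falls back to model.generation_config (populated from the repo's
# generation_config.json) for temperature, top_p, repetition_penalty, etc.
# Leaving those defaults untouched is what the Tips bullet recommends against
# code switching and other bad cases.
generated_ids = model.generate(**inputs, max_new_tokens=256)
response = tokenizer.batch_decode(
    generated_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
)[0]
print(response)
```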