JosephusCheung committed
Commit
6bd26d0
1 Parent(s): 9edc20f

Update README.md

Files changed (1)
  1. README.md +3 -0
README.md CHANGED
@@ -3,6 +3,9 @@ license: gpl
 ---
 # CausalLM 34B β
 
+## PROMPT FORMAT:
+[chatml](https://github.com/openai/openai-python/blob/main/chatml.md)
+
 There are some issues with the model weights in terms of precision. In the next version update, we will roll back some progress and retrain to fix these issues as soon as possible.
 
 **Please note:** For now, do not use "accelerated inference frameworks" like **VLLM**; use Transformers for inference instead. Otherwise, due to precision issues, the output quality will be significantly degraded. If you need faster inference, you can temporarily use the q8_0 quantization with llama.cpp (faster and better than bf16 VLLM for this model only) or wait for the official version.
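
The added lines point to the ChatML prompt format. As a minimal sketch (not part of the commit), the layout described at that link wraps each turn in `<|im_start|>{role}` … `<|im_end|>` and leaves the assistant turn open; the roles and messages below are purely illustrative:

```python
# Minimal sketch of the ChatML layout referenced above: each turn is wrapped in
# <|im_start|>{role} ... <|im_end|>, and the prompt ends with an open assistant turn.
def to_chatml(messages):
    """Render a list of {"role": ..., "content": ...} dicts as a ChatML prompt string."""
    prompt = ""
    for message in messages:
        prompt += f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>\n"
    prompt += "<|im_start|>assistant\n"  # leave the assistant turn open for generation
    return prompt


messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the ChatML format in one sentence."},
]
print(to_chatml(messages))
```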
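Since the note recommends plain Transformers over vLLM for now, here is a minimal sketch of such a load-and-generate flow. The Hub id `CausalLM/34b-beta`, the bf16 dtype, and the sampling settings are assumptions for illustration and are not taken from this commit:

```python
# Minimal sketch of plain Transformers inference, as the note above recommends instead of
# vLLM. The Hub id, dtype, and sampling settings below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CausalLM/34b-beta"  # assumed repository id for CausalLM 34B β

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # load in bf16 with Transformers rather than serving via vLLM
    device_map="auto",
)

# ChatML prompt (see the sketch above), ending with an open assistant turn.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nHello!<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
reply = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(reply)
```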