dranger003 committed
Commit 5eb0da1
1 Parent(s): 811aa0b

Update README.md

Files changed (1):
  1. README.md +11 -4
README.md CHANGED
@@ -2,11 +2,18 @@
  license: other
  license_name: tongyi-qianwen
  license_link: https://huggingface.co/Qwen/Qwen1.5-72B-Chat/blob/main/LICENSE
- library_name: gguf
  pipeline_tag: text-generation
+ library_name: gguf
+ base_model: Qwen/Qwen1.5-72B-Chat
  ---
- GGUF importance matrix (imatrix) quants for https://huggingface.co/Qwen/Qwen1.5-72B-Chat
+ <u>**NOTE**</u>: You will need a recent build of llama.cpp to run these quants (i.e., at least commit `494c870`).
+
+ **2024-03-07**: Updated quants using the latest build, as things seem to have stabilized a bit now.
+
+ GGUF importance matrix (imatrix) quants for https://huggingface.co/Qwen/Qwen1.5-72B-Chat
+ * The importance matrix was trained on ~50K tokens (105 batches of 512 tokens) from a [general-purpose imatrix calibration dataset](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384).
+ * The [imatrix is also applied to the K-quants](https://github.com/ggerganov/llama.cpp/pull/4930).

- | Layers | Context | Template |
+ | Layers | Context | [Template](https://huggingface.co/Qwen/Qwen1.5-72B-Chat/blob/main/tokenizer_config.json#L31) |
  | --- | --- | --- |
- | <pre>80</pre> | <pre>32768</pre> | <pre><\|im_start\|>system<br>{instructions}<\|im_end\|><br><\|im_start\|>user<br>{prompt}<\|im_end\|><br><\|im_start\|>assistant<br>{response}</pre> |
+ | <pre>80</pre> | <pre>32768</pre> | <pre><\|im_start\|>system<br>{instructions}<\|im_end\|><br><\|im_start\|>user<br>{prompt}<\|im_end\|><br><\|im_start\|>assistant<br>{response}</pre> |
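
For the NOTE above, a minimal sketch, assuming the make-based llama.cpp build of that era, of getting a compatible build and loading one of these quants; the `.gguf` filename is a hypothetical placeholder for an actual file from this repository:

```sh
# Build llama.cpp at (or after) the commit the README requires
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 494c870   # at least this commit, per the note
make

# Smoke-test a quant; the filename below is hypothetical
./main -m qwen1.5-72b-chat-iq2_xs.gguf -ngl 99 -p "Hello"
```

`-ngl 99` offloads all layers to the GPU and can be dropped for a CPU-only run.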
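The imatrix bullets describe settings that map onto llama.cpp's own tools; a sketch, assuming a full-precision source GGUF and placeholder filenames, of how such an imatrix is computed and then applied during quantization:

```sh
# Compute an importance matrix: 512-token chunks, 105 chunks ~= 50K tokens
./imatrix -m qwen1.5-72b-chat-f16.gguf -f calibration-data.txt \
  -o imatrix.dat -c 512 --chunks 105

# Quantize with the imatrix (per PR #4930 it also benefits K-quants)
./quantize --imatrix imatrix.dat qwen1.5-72b-chat-f16.gguf \
  qwen1.5-72b-chat-iq2_xs.gguf IQ2_XS
```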
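The template in the table is standard ChatML; a sketch of a one-shot prompt following it (the `-e` flag, which expands the `\n` escapes, and the filename are assumptions; context can be raised toward the model's 32768 limit with `-c`):

```sh
./main -m qwen1.5-72b-chat-iq2_xs.gguf -ngl 99 -c 8192 -e \
  -p "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nWrite a haiku about autumn.<|im_end|>\n<|im_start|>assistant\n"
```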