dranger003 committed on
Commit
9c3a8a1
1 Parent(s): 6701a29

Update README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED
@@ -6,7 +6,7 @@ library_name: gguf
 GGUF importance matrix (imatrix) quants for https://huggingface.co/abacusai/Smaug-Mixtral-v0.1
 The importance matrix was trained for ~50K tokens (105 batches of 512 tokens) using a [general purpose imatrix calibration dataset](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384).

-**NOTE**: The new IQ3_M/IQ3_S/Q3_K_XS quants are currently causing a segfault during quantization, so I'll upload them once llama.cpp gets fixed. The imatrix is being used on the K-quants as well.
+**NOTE**: The new IQ3_M/IQ3_S/Q3_K_XS quants are currently causing a segfault during quantization, so I'll upload them once llama.cpp gets fixed. The [imatrix is being used on the K-quants](https://github.com/ggerganov/llama.cpp/pull/4930) as well.
 | Layers | Context | [Template](https://huggingface.co/abacusai/Smaug-Mixtral-v0.1/blob/main/tokenizer_config.json#L32) |
 | --- | --- | --- |
 | <pre>32</pre> | <pre>32768</pre> | <pre>\<s\>[INST] {prompt} [/INST]<br>{response}</pre> |
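The template column in the table above can be rendered in code. A minimal sketch of doing so by hand (the function name is hypothetical; most llama.cpp frontends and `tokenizer.apply_chat_template` handle this automatically, and the `<s>` BOS token is typically added by the tokenizer rather than as literal text):

```python
def format_smaug_prompt(prompt: str, response: str = "") -> str:
    """Render the Mixtral-style instruct template from the model card:
    <s>[INST] {prompt} [/INST]\n{response}
    """
    text = f"<s>[INST] {prompt} [/INST]"
    if response:
        # Prior turns include the model's response after the closing tag.
        text += "\n" + response
    return text

print(format_smaug_prompt("Explain importance matrices."))
```

For multi-turn use, each completed prompt/response pair is concatenated before the next `[INST]` block.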