ikawrakow committed
Commit: d38bb73
1 Parent(s): 4b035ba

Update README.md

Files changed (1):
1. README.md (+1 -1)
README.md CHANGED
@@ -6,4 +6,4 @@ This repository contains 2-bit quantized LLaMA-v2 models in GGUF format for use
  All tensors are quantized with `Q2_K`, except for `output.weight`, which is `Q6_K`, and, in the case of LLaMA-v2-70B, `attn_v`, which is `Q4_K`.
  The quantized models differ from the standard `llama.cpp` 2-bit quantization in two ways:
  * These are actual 2-bit quantized models instead of the mostly 3-bit quantization provided by the standard `llama.cpp` `Q2_K` quantization method
- * The models were prepared a refined (but not yet published) k-quants quantization approach
+ * The models were prepared with a refined (but not yet published) k-quants quantization approach
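The README describes GGUF files intended for use with `llama.cpp`. As a minimal sketch (not part of this commit), the snippet below loads one of these 2-bit models through the `llama.cpp` C API; the file name is a placeholder, and the exact function signatures follow llama.cpp releases from this period and have changed in later versions.

```cpp
#include <cstdio>
#include "llama.h"

int main() {
    // Initialize the llama.cpp backend (the bool NUMA flag matches the API of
    // this period; newer releases take no argument here).
    llama_backend_init(false);

    // Load the 2-bit quantized GGUF model with default parameters.
    // "llama-v2-7b-q2_k.gguf" is a placeholder file name.
    llama_model_params mparams = llama_model_default_params();
    llama_model * model = llama_load_model_from_file("llama-v2-7b-q2_k.gguf", mparams);
    if (model == NULL) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    // Create an inference context; tokenization, prompt evaluation, and
    // sampling would go here.
    llama_context_params cparams = llama_context_default_params();
    llama_context * ctx = llama_new_context_with_model(model, cparams);

    llama_free(ctx);
    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```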