ikawrakow committed 8ad9bf8 (1 parent: d28c091): Update README.md

Files changed (1): README.md (+14, -0)

README.md:
---
license: apache-2.0
---

This repository contains 2-bit quantized LLaMA-v1 models in GGUF format for use with [llama.cpp](https://github.com/ggerganov/llama.cpp).
All tensors are quantized with `Q2_K`, except for `output.weight`, which is `Q6_K`, and, in the case of LLaMA-v2-70B, `attn_v`, which is `Q4_K`.
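As a quick check, a model from this repository can be run directly with the stock `main` example from `llama.cpp`. This is only a sketch: the GGUF file name below is a placeholder for whichever file is downloaded from this repository.

```bash
# Build llama.cpp, then run one of the 2-bit models from this repository.
# "llama-v1-7b-q2k.gguf" is a placeholder file name.
make -j
./main -m llama-v1-7b-q2k.gguf -c 2048 -n 128 -p "The meaning of life is"
```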
The quantized models differ from the standard `llama.cpp` 2-bit quantization in two ways:
* These are actual 2-bit quantized models, rather than the mostly 3-bit models produced by the standard `llama.cpp` `Q2_K` quantization method (the stock method is sketched after this list).
* The models were prepared with a refined (but not yet published) k-quants quantization approach.

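For reference, the standard `Q2_K` baseline mentioned in the first bullet can be reproduced with the stock `quantize` tool that ships with `llama.cpp`. The refined approach used for the models here is not published, so this sketch only yields that baseline; file names are placeholders.

```bash
# Stock llama.cpp Q2_K quantization (the baseline, NOT the refined method
# used for the models in this repository). File names are placeholders.
./quantize llama-7b-f16.gguf llama-7b-q2_k.gguf Q2_K
```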
The table below shows Wikitext perplexities for a context length of 2048 tokens, computed with these models using `llama.cpp`:

| Model | Perplexity |
|---|---|
| 7B | 6.4023 |
| 13B | 5.3967 |
| 30B | 4.5065 |
| 65B | 3.9136 |
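The numbers above can be checked with the `perplexity` tool included in `llama.cpp`. The invocation below is an assumption about the exact setup: the test file name follows the WikiText-2 raw dataset as used in the `llama.cpp` documentation, and the model file name is a placeholder.

```bash
# Compute Wikitext-2 perplexity at context length 2048, as in the table.
# wiki.test.raw is from the WikiText-2 raw dataset; model name is a placeholder.
./perplexity -m llama-v1-7b-q2k.gguf -f wikitext-2-raw/wiki.test.raw -c 2048
```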