maddes8cht committed
Commit 78a798a • 1 Parent(s): bdd241a

"Update README.md"

Files changed (1): README.md +4 -0
README.md CHANGED
@@ -15,9 +15,12 @@ These will contain increasingly more content to help find the best models for a
 # falcon-7b - GGUF
 - Model creator: [tiiuae](https://huggingface.co/tiiuae)
 - Original model: [falcon-7b](https://huggingface.co/tiiuae/falcon-7b)
+
 These are gguf quantized models of the original Falcon 7B Model by tiiuae.
 Falcon is a foundational large language model coming in two different sizes: 7b and 40b.
 
+
+
 # About GGUF format
 
 `gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
@@ -41,6 +44,7 @@ So, if possible, use K-quants.
 With a Q6_K you should find it really hard to notice a quality difference from the original model - ask your model the same question twice and you may encounter bigger quality differences.
 
 
+
 # Original Model Card:
 # 🚀 Falcon-7B
 
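The GGUF format referenced in this README begins with a small fixed-size header that identifies the file and describes how much metadata and tensor data follows. As a rough sketch (field layout per the ggml GGUF spec; the function name and example values here are my own, not from this repo), the prefix can be parsed like this:

```python
import struct

GGUF_MAGIC = b"GGUF"

def read_gguf_header(buf: bytes) -> dict:
    """Parse the fixed-size prefix of a GGUF file header.

    Layout (little-endian): 4-byte magic 'GGUF', uint32 version,
    uint64 tensor count, uint64 metadata key/value count.
    """
    if buf[:4] != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    version, n_tensors, n_kv = struct.unpack_from("<IQQ", buf, 4)
    return {"version": version, "tensor_count": n_tensors, "kv_count": n_kv}
```

Tools like `llama.cpp` read this header first to decide how to interpret the rest of the file; the quantization type (e.g. Q6_K) of each tensor is recorded in the per-tensor metadata that follows.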