Update README.md
README.md CHANGED
@@ -32,7 +32,7 @@ inference: false
 *Last updated on 2023-09-28.*
 
 **Description:**
-- The motivation
+- The motivation behind these quantizations was that latestissue's quants were missing the 0.1B and 0.4B models. The rest of the models can be found here: [latestissue/rwkv-4-world-ggml-quantized](https://huggingface.co/latestissue/rwkv-4-world-ggml-quantized)
 
 **RAM usage (WIP):**
 Model | Startup RAM usage (KoboldCpp)