Update README.md
README.md CHANGED
@@ -1,3 +1,14 @@
+---
+base_model: RWKV/rwkv-6-world-1b6
+library_name: gguf
+license: apache-2.0
+quantized_by: Lyte
+tags:
+- text-generation
+- rwkv
+- rwkv-6
+---
+
 # RWKV-6-World-1.6B-GGUF-Q4_K_M
 
 This repo contains the RWKV-6-World-1.6B-GGUF quantized with the latest llama.cpp(b3651).
@@ -9,7 +20,7 @@ This repo contains the RWKV-6-World-1.6B-GGUF quantized with the latest llama.cp
 git clone https://github.com/ggerganov/llama.cpp
 ```
 
-* Download the GGUF file to a new model folder in llama.cpp(
+* Download the GGUF file to a new model folder in llama.cpp(choose your quant):
 ```
 cd llama.cpp
 mkdir model