Update README.md
README.md CHANGED
@@ -7,6 +7,7 @@ tags:
 ---
 Converted for use with [llama.cpp](https://github.com/ggerganov/llama.cpp)
 ---
+- Based on AlekseyKorshuk/vicuna-7b
 - 4-bit quantized
 - Needs ~6GB of CPU RAM
 - Won't work with alpaca.cpp or old llama.cpp (new ggml format requires latest llama.cpp)
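Since the updated card notes that the file uses the new ggml format, a minimal sketch of running the converted model with a current llama.cpp build is shown below. The model filename is a placeholder for the quantized file shipped in this repo, and the exact flags can vary between llama.cpp versions.

```sh
# Build a recent llama.cpp (alpaca.cpp and older llama.cpp builds cannot read the new ggml format)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && make

# Run the 4-bit quantized model (needs roughly 6GB of CPU RAM);
# "vicuna-7b-q4_0.bin" is a placeholder name for the file from this repo
./main -m ./models/vicuna-7b-q4_0.bin -t 8 -n 128 -p "Hello! How are you?"
```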