Update README.md
README.md
@@ -7,7 +7,23 @@ This repo contains the weights of the Koala 7B model produced at Berkeley. It is
 
 This version has then been converted to HF format.
 
-
+## My Koala repos
+I have the following Koala model repositories available:
+
+**13B models:**
+* [Unquantized 13B model in HF format](https://huggingface.co/TheBloke/koala-13B-HF)
+* [GPTQ quantized 4bit 13B model in `pt` and `safetensors` formats](https://huggingface.co/TheBloke/koala-13B-GPTQ-4bit-128g)
+* [GPTQ quantized 4bit 13B model in GGML format for `llama.cpp`](https://huggingface.co/TheBloke/koala-13B-GPTQ-4bit-128g-GGML)
+
+**7B models:**
+* [Unquantized 7B model in HF format](https://huggingface.co/TheBloke/koala-7B-HF)
+* [Unquantized 7B model in GGML format for llama.cpp](https://huggingface.co/TheBloke/koala-7b-ggml-unquantized)
+* [GPTQ quantized 4bit 7B model in `pt` and `safetensors` formats](https://huggingface.co/TheBloke/koala-7B-GPTQ-4bit-128g)
+* [GPTQ quantized 4bit 7B model in GGML format for `llama.cpp`](https://huggingface.co/TheBloke/koala-7B-GPTQ-4bit-128g-GGML)
+
+## How the Koala delta weights were merged
+
+The Koala delta weights were merged using the following commands:
 ```
 git clone https://github.com/young-geng/EasyLM
 
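The diff is truncated after the first merge command, so the full recipe is not shown here. Conceptually, "merging delta weights" means adding each published delta tensor to the matching tensor of the original base LLaMA checkpoint. The Python sketch below illustrates that idea only; the file paths are hypothetical, and it is not the actual EasyLM-based procedure the README's commands use.

```python
# Illustrative sketch of delta-weight merging, NOT the EasyLM pipeline.
# Assumption: base and delta checkpoints are plain PyTorch state dicts
# with identical tensor names; all paths here are hypothetical.
import torch

base = torch.load("llama-7b/consolidated.bin", map_location="cpu")   # original LLaMA weights
delta = torch.load("koala-7b-delta/delta.bin", map_location="cpu")   # published Koala deltas

# Recover the fine-tuned weights by elementwise addition, tensor by tensor.
merged = {name: base[name] + delta[name] for name in base}

torch.save(merged, "koala-7b/merged.bin")
```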
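Once you have one of the unquantized HF-format repos listed above, loading it follows the standard `transformers` pattern. A minimal sketch, assuming a recent `transformers` install and enough RAM for the unquantized weights (the prompt string follows Koala's "BEGINNING OF CONVERSATION" convention):

```python
# Minimal usage sketch for the unquantized HF-format repo listed above.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TheBloke/koala-7B-HF")
model = AutoModelForCausalLM.from_pretrained("TheBloke/koala-7B-HF")

# Koala-style conversation prompt.
prompt = "BEGINNING OF CONVERSATION: USER: What is a koala? GPT:"
inputs = tokenizer(prompt, return_tensors="pt")

output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```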