TheBloke committed
Commit 8477d26
1 Parent(s): de3eb73

Update README.md

Files changed (1)
README.md +3 -2
README.md CHANGED
@@ -6,8 +6,9 @@ This repo contains the weights of the Koala 7B model produced at Berkeley. It is
 
 This version has then been quantized to 4bit using https://github.com/qwopqwop200/GPTQ-for-LLaMa
 
-For the unquantized model in HF format, see this repo: https://huggingface.co/TheBloke/koala-7B-HF
-For the unquantized model in GGML format for llama.cpp, see this repo: https://huggingface.co/TheBloke/koala-7b-ggml-unquantized
+These other versions are also available:
+* [Unquantized model in HF format](https://huggingface.co/TheBloke/koala-7B-HF)
+* [Unquantized model in GGML format for llama.cpp](https://huggingface.co/TheBloke/koala-7b-ggml-unquantized)
 
 ### WARNING: At the present time the GPTQ files uploaded here seem to be producing garbage output. It is not recommended to use them.
 
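As context for the quantization note in the diff, below is a minimal sketch of how a 4-bit GPTQ checkpoint like this one is commonly loaded. It assumes the AutoGPTQ library and a hypothetical repo id (`TheBloke/koala-7B-GPTQ`), neither of which is specified in this commit, and the prompt template is only what is reported for Koala upstream; given the WARNING in the README, generation quality may be poor.

```python
# Minimal, hypothetical sketch of loading a 4-bit GPTQ checkpoint with AutoGPTQ.
# The repo id, safetensors flag, and prompt format are assumptions for
# illustration, not taken from this commit.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo_id = "TheBloke/koala-7B-GPTQ"  # hypothetical id; substitute the actual GPTQ repo

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    device="cuda:0",
    use_safetensors=True,  # assumption about how the quantized weights are stored
)

# Koala-style prompt format as reported upstream; verify against the model card.
prompt = "BEGINNING OF CONVERSATION: USER: What is Koala? GPT:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```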