TheBloke committed
Commit 05b36bd
1 Parent(s): 034e79a

Update README.md

Files changed (1):
  1. README.md +10 -10
README.md CHANGED
@@ -8,7 +8,7 @@ This version has then been quantized to 4-bit using [GPTQ-for-LLaMa](https://git
 
  ## Other Koala repos
 
- I have also made these other Koala repose available:
+ I have also made these other Koala models available:
  * [GPTQ quantized 4bit 13B model in `pt` and `safetensors` formats](https://huggingface.co/TheBloke/koala-13B-GPTQ-4bit-128g)
  * [Unquantized 13B model in HF format](https://huggingface.co/TheBloke/koala-13B-HF)
  * [Unquantized 7B model in HF format](https://huggingface.co/TheBloke/koala-7B-HF)
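
For reference, any of the repos listed above can be fetched with a plain `git` clone. A minimal sketch, assuming `git-lfs` is installed (without it, the large weight files come down as LFS pointer stubs rather than real weights):

```
# Example: fetch the 4-bit 13B GPTQ repo, weights included
git lfs install
git clone https://huggingface.co/TheBloke/koala-13B-GPTQ-4bit-128g
```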
@@ -28,17 +28,17 @@ I created this model using the latest Triton branch of GPTQ-for-LLaMa but it can
 
  I have provided both a `pt` and `safetensors` file. Either should work.
 
- If both are present in the model directory for text-generation-webui I am not sure which it picks, so if you need one or the other specifically I'd recommend just downloading the one you need.
+ If both are present in the model directory for text-generation-webui, I am not sure which it chooses, so you may want to place only one in the models folder.
 
  The `olderFormat` file was created with the aim of then converting it to GGML for use with [llama.cpp](https://github.com/ggerganov/llama.cpp). At present this file does not work.
 
- ## How to run with text-generation-webui
+ ## How to run with `text-generation-webui`
 
- The model files provided will not load as-is with [oobaboogas text-generation-webui](https://github.com/oobabooga/text-generation-webui).
+ The GPTQ model files provided will not load as-is with [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui).
 
- They require the latest version of the GPTQ code.
+ These model files require the latest version of the GPTQ code.
 
- Here are the commands I used to clone GPTQ-for-LLaMa, clone text-generation-webui, and install GPTQ into the UI:
+ Here are the commands I used to clone the Triton branch of GPTQ-for-LLaMa, clone text-generation-webui, and install GPTQ into the UI:
  ```
  git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
  git clone https://github.com/oobabooga/text-generation-webui
@@ -46,15 +46,15 @@ mkdir -p text-generation-webui/repositories
  ln -s GPTQ-for-LLaMa text-generation-webui/repositories/GPTQ-for-LLaMa
  ```
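
One caveat on the `ln -s` line above: a relative symlink target is resolved against the directory containing the link, not the directory the command was run from, so the link as written can end up dangling. A safer sketch, assuming both clones sit in the current working directory:

```
# Use an absolute target so the link resolves no matter where it lives
ln -s "$(pwd)/GPTQ-for-LLaMa" text-generation-webui/repositories/GPTQ-for-LLaMa
```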
 
- Then install this model into `text-generation-webui/models` and run text-generation-webui as follows:
+ Then install this model into `text-generation-webui/models` and launch the UI as follows:
  ```
  cd text-generation-webui
- python server.py --model koala-7B-GPTQ-4bit-128g --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want
+ python server.py --model koala-7B-4bit-128g --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want
  ```
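
As a concrete sketch of the "install this model" step, assuming `git-lfs` is available and that this repo lives at https://huggingface.co/TheBloke/koala-7B-GPTQ-4bit-128g (the target folder name only needs to match the `--model` argument above):

```
# Clone the quantized model straight into the webui models folder
git lfs install
git clone https://huggingface.co/TheBloke/koala-7B-GPTQ-4bit-128g text-generation-webui/models/koala-7B-4bit-128g
```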
 
  The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information.
 
- If you cannot use the Triton branch of GPTQ for any reason, it should also work to use the CUDA branch instead:
+ If you cannot use the Triton branch of GPTQ for any reason, you can use the CUDA branch instead:
  ```
  git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa -b cuda
  cd GPTQ-for-LLaMa
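
Note that with the CUDA branch the quantization kernels also have to be compiled before the UI can load GPTQ models. A sketch from memory of that branch's install step (an assumption; check the GPTQ-for-LLaMa README for the current command):

```
# Build and install the CUDA kernels; assumes a working CUDA toolchain
python setup_cuda.py install
```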
@@ -70,7 +70,7 @@ git clone https://github.com/young-geng/EasyLM
 
  git clone https://huggingface.co/nyanko7/LLaMA-7B
 
- git clone https://huggingface.co/young-geng/koala koala_diffs
+ mkdir koala_diffs && cd koala_diffs && wget https://huggingface.co/young-geng/koala/resolve/main/koala_7b_diff_v2
 
  cd EasyLM