TheBloke committed on
Commit 3a6f4ab
1 parent: 73d6e2b

Update README.md

Files changed (1)
  1. README.md +4 -2
README.md CHANGED
@@ -8,7 +8,7 @@ inference: false
 ---
 # Wizard-Vicuna-13B-GGML
 
-This is GGML format quantised 4bit and 5bit models of [junlee's wizard-vicuna 13B](https://huggingface.co/junelee/wizard-vicuna-13b).
+This is GGML format quantised 4bit and 5bit models of [junelee's wizard-vicuna 13B](https://huggingface.co/junelee/wizard-vicuna-13b).
 
 It is the result of quantising to 4bit and 5bit GGML for CPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).
 
@@ -66,7 +66,9 @@ Note: at this time text-generation-webui may not support the new q5 quantisation
 
 **Thireus** has written a [great guide on how to update it to the latest llama.cpp code](https://huggingface.co/TheBloke/wizardLM-7B-GGML/discussions/5) so that these files can be used in the UI.
 
-# Original wizard-vicuna-13B model card
+# Original WizardVicuna-13B model card
+
+Github page: https://github.com/melodysdreamj/WizardVicunaLM
 
 # WizardVicunaLM
 ### Wizard's dataset + ChatGPT's conversation extension + Vicuna's tuning method
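The README context above describes 4bit and 5bit GGML quantisations intended for CPU inference with llama.cpp. As a minimal sketch of how one of these files could be loaded from Python, assuming a GGML-era release of the llama-cpp-python bindings (newer releases expect GGUF files) and a hypothetical local filename:

```python
# Minimal sketch: loading a 4bit GGML quantisation for CPU inference.
# Assumptions: a GGML-era llama-cpp-python release and a hypothetical
# local filename; the prompt format follows the Vicuna USER/ASSISTANT style.
from llama_cpp import Llama

llm = Llama(
    model_path="./wizard-vicuna-13B.ggml.q4_0.bin",  # hypothetical filename
    n_ctx=2048,    # context window size
    n_threads=8,   # CPU threads to use
)

output = llm(
    "USER: Write a haiku about quantisation.\nASSISTANT:",
    max_tokens=128,
    stop=["USER:"],
)
print(output["choices"][0]["text"])
```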