MetaIX committed
Commit 59e8a06
1 Parent(s): 139d845

Update README.md

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -1,17 +1,17 @@
 <p><strong><font size="5">Information</font></strong></p>
 GPT4-X-Alpaca 30B 4-bit, working with the GPTQ versions used in Oobabooga's Text Generation Webui and KoboldAI.
 <p>There are 3 quantized versions: one is quantized using GPTQ's <i>--true-sequential</i> and <i>--act-order</i> optimizations, the second is quantized using GPTQ's <i>--true-sequential</i> and <i>--groupsize 128</i> optimization, and the third is quantized for GGML using q4_1.</p>
- This was made using Chansung's GPT4-Alpaca Lora: https://huggingface.co/chansung/gpt4-alpaca-lora-30b
+ This was made using <a href="https://huggingface.co/chansung/gpt4-alpaca-lora-30b">Chansung's GPT4-Alpaca Lora</a>

 <p><strong>GPU/GPTQ Usage</strong></p>
 <p>To use with your GPU via GPTQ, pick one of the .safetensors files along with all of the .json and .model files.</p>
- <p>Oobabooga: If you require further instruction, see https://github.com/oobabooga/text-generation-webui/blob/main/docs/GPTQ-models-(4-bit-mode).md and https://github.com/oobabooga/text-generation-webui/blob/main/docs/LLaMA-model.md</p>
- <p>KoboldAI: If you require further instruction, see https://github.com/0cc4m/KoboldAI</p>
+ <p>Oobabooga: If you require further instruction, see <a href="https://github.com/oobabooga/text-generation-webui/blob/main/docs/GPTQ-models-(4-bit-mode).md">here</a> and <a href="https://github.com/oobabooga/text-generation-webui/blob/main/docs/LLaMA-model.md">here</a></p>
+ <p>KoboldAI: If you require further instruction, see <a href="https://github.com/0cc4m/KoboldAI">here</a></p>

 <p><strong>CPU/GGML Usage</strong></p>
 <p>To run on your CPU with GGML (llama.cpp), you only need the single .bin GGML file.</p>
- <p>Oobabooga: If you require further instruction, see https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md</p>
- <p>KoboldAI: If you require further instruction, see https://github.com/LostRuins/koboldcpp</p>
+ <p>Oobabooga: If you require further instruction, see <a href="https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md">here</a></p>
+ <p>KoboldAI: If you require further instruction, see <a href="https://github.com/LostRuins/koboldcpp">here</a></p>

 <p><strong>Training Parameters</strong></p>
 <ul><li>num_epochs=10</li><li>cutoff_len=512</li><li>group_by_length</li><li>lora_target_modules='[q_proj,k_proj,v_proj,o_proj]'</li><li>lora_r=16</li><li>micro_batch_size=8</li></ul>
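Outside of the two front ends, the GPTQ .safetensors files can also be loaded directly from Python. The following is a minimal sketch only, assuming the AutoGPTQ library, an Alpaca-style prompt, and hypothetical local folder/file names; depending on your AutoGPTQ version you may also need a quantize_config.json matching the variant you downloaded. This is not the author's documented workflow, just one way to sanity-check the files.

```python
# Minimal sketch: loading one of the 4-bit GPTQ .safetensors with AutoGPTQ.
# Folder and file names below are placeholders; point them at the files you downloaded.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_dir = "./gpt4-x-alpaca-30b-4bit"  # contains the .json/.model files plus one .safetensors

# Describe the variant you picked: --groupsize 128 maps to group_size=128,
# --act-order maps to desc_act=True (set these to match your chosen file).
quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)

tokenizer = AutoTokenizer.from_pretrained(model_dir, use_fast=False)
model = AutoGPTQForCausalLM.from_quantized(
    model_dir,
    quantize_config=quantize_config,
    model_basename="gpt4-x-alpaca-30b-128g-4bit",  # hypothetical; your .safetensors name without extension
    use_safetensors=True,
    device="cuda:0",
)

# Alpaca-style prompt format is an assumption based on the model's lineage.
prompt = "### Instruction:\nExplain GPTQ quantization in one paragraph.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```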
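The GGML .bin can likewise be driven directly with llama-cpp-python. Below is a minimal sketch with a placeholder file name and thread count; note that an early-2023 q4_1 file may require a llama.cpp / llama-cpp-python build from the same period.

```python
# Minimal sketch: running the q4_1 GGML .bin on CPU with llama-cpp-python.
# The model_path is a placeholder; point it at the single .bin file from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="./gpt4-x-alpaca-30b-ggml-q4_1.bin",  # hypothetical filename
    n_ctx=2048,   # LLaMA-1 context window
    n_threads=8,  # tune to your CPU
)

# Alpaca-style prompt format is an assumption based on the model's lineage.
prompt = "### Instruction:\nSummarize what GGML q4_1 quantization is.\n\n### Response:\n"
out = llm(prompt, max_tokens=128, temperature=0.7, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```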
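The training parameters above are the knobs of an alpaca-lora style LoRA fine-tune. As a rough illustration, the LoRA-specific ones map onto a PEFT configuration as sketched below; values not listed in the card (such as lora_alpha and lora_dropout) are assumptions, not the author's settings.

```python
# Sketch of a PEFT LoraConfig matching the listed parameters.
# Only r and target_modules come from the list above; alpha/dropout are illustrative defaults.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                                                      # lora_r=16
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],   # lora_target_modules
    lora_alpha=16,                                             # assumption: not given in the card
    lora_dropout=0.05,                                         # assumption: not given in the card
    bias="none",
    task_type="CAUSAL_LM",
)

# The remaining flags (num_epochs=10, cutoff_len=512, micro_batch_size=8, group_by_length)
# belong to the training loop / Trainer arguments rather than the LoRA config itself.
```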