Commit 63a9f49 by TehVenom
Parent: 1b8f33e

Update README.md

Files changed (1): README.md (+1 -0)
README.md CHANGED
@@ -184,6 +184,7 @@ for those of you familiar with the project.
 The current Pygmalion-13b has been trained as a LoRA, then merged down to the base model for distribution.
 
 It has also been quantized down to 8Bit using the GPTQ library available here: https://github.com/0cc4m/GPTQ-for-LLaMa
+
 ```
 python llama.py .\TehVenom_Metharme-13b-Merged c4 --wbits 8 --act-order --save_safetensors Metharme-13b-GPTQ-8bit.act-order.safetensors
 ```
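
For readers unfamiliar with the workflow described in the README text above, here is a minimal sketch of the "trained as a LoRA, then merged down to the base model" step using the `peft` library. It is an illustration only, not the maintainer's actual script: the base-model and adapter paths are hypothetical placeholders, and the output directory name simply mirrors the one passed to `llama.py` in the command above.

```python
# Minimal sketch (not the authors' exact procedure): fold a LoRA adapter into
# its base LLaMA model with peft, producing the merged checkpoint that
# GPTQ-for-LLaMa's llama.py takes as input. All paths are hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "path/to/llama-13b-hf"           # hypothetical base model directory
ADAPTER = "path/to/pygmalion-13b-lora"  # hypothetical LoRA adapter directory
OUT = "TehVenom_Metharme-13b-Merged"    # mirrors the directory in the command above

base_model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(BASE)

# Attach the LoRA adapter, then merge its weights into the base model
# and drop the adapter wrapper.
model = PeftModel.from_pretrained(base_model, ADAPTER)
merged = model.merge_and_unload()

# Save the merged full-precision checkpoint; GPTQ quantization runs on this.
merged.save_pretrained(OUT, safe_serialization=True)
tokenizer.save_pretrained(OUT)
```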