kaiokendev committed on
Commit
41fef8d
1 Parent(s): e218269

Update README.md

Files changed (1)
  1. README.md +3 -2
README.md CHANGED
@@ -18,8 +18,9 @@ Trained against LLaMa 30B 4-bit for 3 epochs with cutoff length 1024, using a mi
  - Alpaca GPT4
 
  ### Merged Models
- - [https://huggingface.co/gozfarb/llama-30b-supercot-ggml](https://huggingface.co/gozfarb/llama-30b-supercot-ggml)
- - [https://huggingface.co/ausboss/llama-30b-supercot](https://huggingface.co/ausboss/llama-30b-supercot)
+ - GGML 30B 4-bit: [https://huggingface.co/gozfarb/llama-30b-supercot-ggml](https://huggingface.co/gozfarb/llama-30b-supercot-ggml)
+ - 30B (unquantized): [https://huggingface.co/ausboss/llama-30b-supercot](https://huggingface.co/ausboss/llama-30b-supercot)
+ - 30B 4-bit 128g CUDA: [https://huggingface.co/tsumeone/llama-30b-supercot-4bit-128g-cuda](https://huggingface.co/tsumeone/llama-30b-supercot-4bit-128g-cuda)
 
  ### Compatibility
  This LoRA is compatible with any 13B or 30B 4-bit quantized LLaMa model, including ggml quantized converted bins
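For the compatibility note above, a minimal sketch of attaching this LoRA to an unquantized LLaMA base checkpoint with `transformers` and `peft` is shown below. The repo IDs (`huggyllama/llama-30b`, `kaiokendev/SuperCOT-LoRA`), the Alpaca-style prompt, and the generation settings are illustrative assumptions, not part of this commit; loading the 4-bit GGML or GPTQ merges listed above instead uses their own tooling (e.g. llama.cpp or a GPTQ loader).

```python
# Minimal sketch, assuming an unquantized FP16 base model; repo IDs, prompt
# format, and generation settings are illustrative assumptions.
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_id = "huggyllama/llama-30b"      # assumed 30B LLaMA base checkpoint
lora_id = "kaiokendev/SuperCOT-LoRA"  # assumed ID of this LoRA repo

tokenizer = LlamaTokenizer.from_pretrained(base_id)
model = LlamaForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the LoRA adapter weights on top of the base model.
model = PeftModel.from_pretrained(model, lora_id)

# Alpaca-style prompt (assumed), matching the instruction data listed above.
prompt = "### Instruction:\nSummarize what a LoRA adapter does.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```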