---
license: other
---

NOTE: The safetensors 4-bit quant will be uploaded within the day. Cheers.

This is a GPTQ 4-bit quant of ChanSung's Elina 33b.
It is a LLaMA-based model; the LoRA was merged using the latest transformers conversion.
Quantized with GPTQ using --wbits 4 --act-order --true-sequential --save_safetensors, calibrated on c4.

A group size of 128 was not used, so those running this on a consumer GPU with 24 GB of VRAM can run it
at full context (2048 tokens) without any risk of OOM.
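The flags above correspond roughly to the following kind of invocation. This is a sketch assuming the GPTQ-for-LLaMa `llama.py` quantization script; the model path and output filename are illustrative, not the exact command used here:

```shell
# Hypothetical quantization command (GPTQ-for-LLaMa style).
# ./elina-33b-merged is the LoRA-merged HF model dir; c4 is the calibration dataset.
python llama.py ./elina-33b-merged c4 \
    --wbits 4 \
    --act-order \
    --true-sequential \
    --save_safetensors elina-33b-4bit.safetensors
```

Note that no `--groupsize` flag is passed, matching the no-groupsize choice described above.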


Original LoRA:
https://huggingface.co/LLMs/Alpaca-LoRA-30B-elina


Repo:
https://huggingface.co/LLMs


Likely Author:
https://huggingface.co/chansung