Information

GPT4-X-Alpasta-30b works with Oobabooga's Text Generation Webui and KoboldAI.

This is an attempt at improving Open Assistant's performance as an instruct model while retaining its excellent prose. The merge consists of Chansung's GPT4-Alpaca LoRA and Open Assistant's native fine-tune.
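
For those curious how such a merge is typically done, the sketch below shows one common way to fold a LoRA into a base model with the peft library. It is illustrative only; the repo IDs are assumptions, not the exact sources or procedure used for this model.

```python
# Illustrative sketch of a LoRA merge with the peft library.
# The repo IDs below are placeholders, not the exact sources
# used to produce this model.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("open-assistant-llama-30b")   # hypothetical ID
lora = PeftModel.from_pretrained(base, "gpt4-alpaca-lora-30b")            # hypothetical ID
model = lora.merge_and_unload()  # folds the LoRA weights into the base weights
model.save_pretrained("gpt4-x-alpasta-30b")
```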

What's included

GPTQ: 2 quantized versions. One was quantized using the --true-sequential and --act-order optimizations, and the other using --true-sequential --groupsize 128 (coming soon). The corresponding flag combinations are sketched below the list.

GGML: 1 quantized version, quantized using q4_1.
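
As referenced above, the two GPTQ variants differ only in their quantization flags. A hedged sketch of the invocations, assuming the GPTQ-for-LLaMa quantization script (paths and output filenames are placeholders):

```python
# Sketch of the two GPTQ quantization invocations, driven from Python.
# Assumes a GPTQ-for-LLaMa checkout; model directory and output
# filenames are illustrative.
import subprocess

common = ["python", "llama.py", "gpt4-x-alpasta-30b", "c4",
          "--wbits", "4", "--true-sequential"]

# Variant 1: act-order, no groupsize (fits full context in 24GB VRAM)
subprocess.run(common + ["--act-order",
                         "--save_safetensors", "gpt4-x-alpasta-30b-4bit.safetensors"],
               check=True)

# Variant 2: groupsize 128 (slightly better perplexity, more VRAM)
subprocess.run(common + ["--groupsize", "128",
                         "--save_safetensors", "gpt4-x-alpasta-30b-4bit-128g.safetensors"],
               check=True)
```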

GPU/GPTQ Usage

To run on your GPU using GPTQ, download one of the .safetensors files along with all of the .json and .model files.

Oobabooga: If you require further instruction, see here and here

KoboldAI: If you require further instruction, see here
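
As one concrete example, text-generation-webui can be launched against the GPTQ files from the command line; the sketch below drives it from Python. The model directory name is an assumption, and the flags reflect the webui's GPTQ options at the time of writing.

```python
# Sketch: launching text-generation-webui with the GPTQ files from Python.
# The model directory name is an assumption.
import subprocess

subprocess.run([
    "python", "server.py",
    "--model", "gpt4-x-alpasta-30b",  # folder with the .safetensors + .json/.model files
    "--wbits", "4",                   # 4-bit GPTQ weights
    "--groupsize", "128",             # only for the groupsize-128 variant; omit for act-order
], check=True)
```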

CPU/GGML Usage

To run on your CPU using GGML (llama.cpp), you only need the single .bin GGML file.

Oobabooga: If you require further instruction, see here

KoboldAI: If you require further instruction, see here
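
If you prefer to script against the GGML file directly instead of using a frontend, llama-cpp-python can load the .bin. A minimal sketch, assuming the filename below:

```python
# Minimal sketch: running the GGML q4_1 file with llama-cpp-python.
# The filename is an assumption.
from llama_cpp import Llama

llm = Llama(model_path="gpt4-x-alpasta-30b-ggml-q4_1.bin", n_ctx=2048)
out = llm("### Instruction:\nWrite a haiku about llamas.\n\n### Response:\n",
          max_tokens=128)
print(out["choices"][0]["text"])
```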

Benchmarks

--true-sequential --act-order

Wikitext2: 4.998758792877197

Ptb-New: 9.802155494689941

C4-New: 7.341384410858154

Note: This version does not use --groupsize 128, so its perplexity scores are slightly higher. However, it allows fitting the whole model at full context in only 24GB of VRAM.
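
For context, the numbers above and below are perplexity scores (lower is better): the exponential of the mean per-token negative log-likelihood on each dataset. A minimal illustration:

```python
# Minimal sketch: perplexity is exp(mean negative log-likelihood per token).
import math

def perplexity(token_nlls):
    """token_nlls: per-token negative log-likelihoods (in nats)."""
    return math.exp(sum(token_nlls) / len(token_nlls))

print(perplexity([1.2, 1.7, 1.5]))  # ~4.33
```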

--true-sequential --groupsize 128

Wikitext2: TBD

Ptb-New: TBD

C4-New: TBD

Note: This version uses --groupsize 128, which yields slightly better perplexity scores but consumes more VRAM.