8bit quantization

#12 by deleted - opened

I wonder if there are any plans to produce an 8-bit (quantized) version of GPT-JT, as was done for the original GPT-J in hivemind/gpt-j-6B-8bit. This could help address the steep hardware requirements raised in discussion #9.

hivemind provides the script they used to quantize GPT-J (convert-gpt-j.ipynb in the model repo), but my attempt to apply it to GPT-JT was unsuccessful.
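
For anyone wondering what that kind of 8-bit weight quantization involves conceptually, here is a toy per-block absmax sketch in plain PyTorch. This is not the hivemind notebook (which relies on bitsandbytes' block-wise quantization and custom linear layers); the function names and block size are only illustrative.

```python
import torch
import torch.nn.functional as F

def quantize_absmax_int8(weight: torch.Tensor, block_size: int = 256):
    """Toy per-block absmax quantization of a weight tensor to int8."""
    flat = weight.flatten().float()
    pad = (-flat.numel()) % block_size          # pad so the length divides evenly
    flat = F.pad(flat, (0, pad))
    blocks = flat.view(-1, block_size)
    scales = blocks.abs().amax(dim=1, keepdim=True).clamp(min=1e-8)
    q = torch.round(blocks / scales * 127).to(torch.int8)
    return q, scales

def dequantize_absmax_int8(q: torch.Tensor, scales: torch.Tensor, shape: torch.Size):
    """Reconstruct an approximate fp32 tensor from the int8 blocks."""
    blocks = q.float() / 127 * scales
    return blocks.flatten()[: shape.numel()].view(shape)

w = torch.randn(4096, 4096, dtype=torch.float16)    # stand-in for one weight matrix
q, s = quantize_absmax_int8(w)
w_hat = dequantize_absmax_int8(q, s, w.shape)
print((w.float() - w_hat).abs().mean())             # small reconstruction error
```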

deleted changed discussion title from Quantization of GPT-JT to Quantization
deleted changed discussion title from Quantization to 8bit quantization
Together org

We have one in the lab, and it does quite well on benchmarks. We'll release it once we've done more performance work on it.

I've used `from_pretrained(..., load_in_8bit=True)` and it seems to work. I haven't benchmarked it yet. Memory-wise it seems to stay under 10 GB this way.
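
For concreteness, a minimal sketch of that call, assuming bitsandbytes and accelerate are installed and that the checkpoint is the public togethercomputer/GPT-JT-6B-v1; treat it as a sketch rather than a recommended configuration:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "togethercomputer/GPT-JT-6B-v1"  # assumed GPT-JT checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_8bit=True,   # int8 weights via bitsandbytes
    device_map="auto",   # let accelerate place layers automatically
)

prompt = "Q: What is 8-bit quantization good for?\nA:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```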

> We have one in the lab, and it does quite well on benchmarks. We'll release it once we've done more performance work on it.

This is exciting news! Any idea how its performance stacks up against the regular version? (If you can share that already, of course.)

Looking forward to the release of the 8-bit quantized model.

Considering the UL2 training objective used in this model, would adjustments need to be made when fine-tuning it, or is it no different from fine-tuning regular GPT-J?
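
To frame the question: if no UL2-specific handling turns out to be needed, fine-tuning would presumably look like standard causal-LM fine-tuning of GPT-J. A rough sketch under that assumption (the data file, sequence length, and hyperparameters are placeholders, and full fine-tuning of a 6B model still requires substantial GPU memory or a parameter-efficient method):

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_id = "togethercomputer/GPT-JT-6B-v1"  # assumed GPT-JT checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token   # GPT-J-style tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_id)

def tokenize(batch):
    # Plain causal-LM objective: labels are the input ids themselves.
    # (In practice one would also mask padding positions in the labels.)
    enc = tokenizer(batch["text"], truncation=True, max_length=512, padding="max_length")
    enc["labels"] = enc["input_ids"].copy()
    return enc

train = load_dataset("text", data_files={"train": "train.txt"})["train"]
train = train.map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="gpt-jt-finetuned",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    fp16=True,
    logging_steps=10,
)

Trainer(model=model, args=args, train_dataset=train).train()
```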
