KinglyCrow committed
Commit 1f0403c (parent: 64be886)

Update README.md

Files changed (1): README.md (+2 -0)
README.md CHANGED
@@ -8,6 +8,8 @@ Make sure to run with FlashAttention like in https://github.com/huggingface/text-generation-inference

Also note the GPTQ 4bit quantized version seems to run about 2x slower than the 8bit bitsandbytes version within text-generation-inference: we were typically seeing about 600-800ms token generation latency with 8bit bitsandbytes, whereas we're seeing about 1.2-1.7s with the 4bit GPTQ version.

+ VRAM usage is a little over 25GB for this 4bit quantized version, compared to 47GB for the 8bit version and 80GB for the full model.
+

This was quantized using:

`text-generation-server quantize tiiuae/falcon-40b-instruct /tmp/falcon40instructgptq --upload-to-model-id AxisMind/falcon-40b-instruct-gptq --trust-remote-code --act-order`
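
Since the README points at running this with FlashAttention in text-generation-inference, here is a minimal launch sketch using the TGI Docker image; the image tag, port mapping, and volume path are assumptions to adapt to your setup:

`docker run --gpus all --shm-size 1g -p 8080:80 -v $PWD/data:/data ghcr.io/huggingface/text-generation-inference:latest --model-id AxisMind/falcon-40b-instruct-gptq --quantize gptq --trust-remote-code`

Once the server is up, a quick smoke test against TGI's generate endpoint (the prompt and token count here are arbitrary):

`curl 127.0.0.1:8080/generate -X POST -H 'Content-Type: application/json' -d '{"inputs": "Falcon is", "parameters": {"max_new_tokens": 64}}'`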