VRAM usage for full 128k tokens

#5
by Hypersniper - opened

Any idea how much more VRAM you'll need to get the full 128k context if you load the model in 4-bit?

NousResearch org

Currently it is about 4 80GB A100s, so 320GB of VRAM; we are working on reducing this with better optimizations...
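
For a rough sense of why long context is so memory hungry: the KV cache alone grows linearly with context length, before counting the weights and attention activations. A back-of-envelope sketch, with placeholder numbers for a generic 7B-class model without grouped-query attention (not necessarily this model's config):

```python
# Back-of-envelope KV-cache size at 128k context.
# The model shape below is an assumption (generic 7B-class, no GQA);
# substitute the real config values for an actual estimate.
n_layers = 32          # transformer blocks (assumed)
n_kv_heads = 32        # key/value heads; GQA models use fewer (assumed)
head_dim = 128         # per-head dimension (assumed)
bytes_per_elem = 2     # fp16/bf16 cache entries
seq_len = 128 * 1024   # 128k tokens

# 2x for keys and values
kv_cache_bytes = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * seq_len
print(f"KV cache: {kv_cache_bytes / 1024**3:.0f} GiB")  # 64 GiB with these numbers
```

On top of that come the model weights and the attention activations, which is where memory-efficient kernels make the biggest difference.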

No need, I'll start my DeLorean, travel to 2030, and get some GPUs to run it locally.

Hi, and thanks for your work, this is amazing.
Could you help me with a question? I do a little work with AI models, and I've been looking for information about running inference on a model across multiple GPUs, but I've only found information about multi-GPU fine-tuning.
Could you point me to links or some documentation about it?

> Currently it is about 4 80GB A100s, so 320GB of VRAM; we are working on reducing this with better optimizations...

Perhaps using flash attention could help

NousResearch org
•
edited Nov 6, 2023

> Currently it is about 4 80GB A100s, so 320GB of VRAM; we are working on reducing this with better optimizations...

> Perhaps using flash attention could help

It is already using flash attention. However, if you are focused solely on inference use cases, dedicated inference kernels in libraries such as vLLM, ExLlama and llamacpp would help reduce the VRAM requirements significantly. I've heard rumours that it should be possible to run 128k context with llamacpp on a single 40GB GPU...
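
As a rough sketch of what such an inference-only setup could look like with llama-cpp-python (the GGUF filename is a placeholder, and actual VRAM use depends on the quant and KV-cache settings):

```python
# Minimal sketch: loading a 4-bit GGUF quant with a 128k context window
# via llama-cpp-python. The file name is a placeholder; whether the full
# KV cache fits on a single GPU depends on the quant and cache settings.
from llama_cpp import Llama

llm = Llama(
    model_path="./model-128k.Q4_K_M.gguf",  # hypothetical local GGUF file
    n_ctx=131072,       # request the full 128k context window
    n_gpu_layers=-1,    # offload all layers to the GPU(s)
    n_batch=512,        # prompt-processing batch size; lower it to save memory
)

out = llm("Summarize the following document:\n...", max_tokens=256)
print(out["choices"][0]["text"])
```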

Anyone got it to work on decent machines with (almost) the full context and could share their experience? I'm trying on 2xA100 with 60k context and I get OOMs when the attention masks are calculated. The deployment uses flash attention and is quantized to 4 bits.
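
For reference, a minimal sketch of the kind of load being described here, assuming recent transformers with bitsandbytes and FlashAttention-2 (the model id is a placeholder; this alone does not guarantee the full context fits):

```python
# Minimal sketch: 4-bit quantization + FlashAttention-2 with recent
# transformers/bitsandbytes. The model id is a placeholder, and
# device_map="auto" simply shards layers across the visible GPUs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "org/long-context-model-128k"  # placeholder id

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    attn_implementation="flash_attention_2",  # requires flash-attn installed
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```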

Later edit:

I managed to get it to work through llamacpp and 4xA10G. The stats are as follows:

  • 60k-token input -> 4 minutes to generate a 4k-token response
  • 120k-token input -> 16 minutes to generate a 4k-token response

The model was quantized to 4 bits.
