Memory requirements to take advantage of full context window

#23
opened by andrewrreed

Thanks for your awesome work on this! What is the recommended hardware setup / inference server config for running this model while taking advantage of the full 1M context window?

My understanding is that since this model was fine-tuned with step-wise RoPE theta scaling, we wouldn't actually apply any RoPE scaling args at inference time, right? Doesn't that come with the drawback of a huge VRAM requirement to actually use the model at these long sequence lengths? Like >1TB of VRAM needed for 262k... making the model somewhat unusable for the GPU-poor?
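For a rough sense of scale, here's a back-of-the-envelope KV-cache estimate (a sketch assuming an fp16 cache and the usual Llama-3-8B config of 32 layers, 8 KV heads, and head dim 128; it ignores the model weights and activations, so real usage will be higher):

```python
# Rough KV-cache size estimate for a Llama-3-8B-style model.
# The default values below are the standard Llama-3-8B config and are assumptions here.
def kv_cache_bytes(context_len, n_layers=32, n_kv_heads=8, head_dim=128, bytes_per_elem=2):
    # Keys and values (the factor of 2) are cached for every layer and every token.
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * context_len

for ctx in (125_000, 262_144, 1_048_576):
    print(f"{ctx:>9} tokens -> {kv_cache_bytes(ctx) / 2**30:.1f} GiB KV cache")
```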

Seems like dynamic RoPE scaling on the base Llama3-8B model alone would be more resource-efficient for achieving really long contexts... though I guess with the tradeoff of lower quality at those long contexts.
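(For concreteness, by "dynamic RoPE scaling" I mean the `rope_scaling` mechanism that transformers exposes on Llama models, roughly as in the sketch below; the factor is just an illustrative value, and newer transformers versions spell the key as `rope_type`:)

```python
from transformers import AutoModelForCausalLM

# Illustrative sketch: load the base 8k-context model with dynamic NTK RoPE scaling.
# The factor of 32.0 (~8k * 32 = 262k) is an example value, not a recommendation.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    rope_scaling={"type": "dynamic", "factor": 32.0},
    torch_dtype="auto",
    device_map="auto",
)
```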

Have you ablated the long-context performance against this baseline dynamic RoPE scaling scenario? Thanks in advance for your thoughts on this!

Probably yes, it requires a lot of VRAM. When I used the 262k model, I tried different context lengths on an A100 with 40 GB of memory, and I was only able to feed in a 125k-token input; 39 GB of memory was used during inference.
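(Roughly how that kind of measurement can be reproduced; this is a sketch only, with an assumed model id and a filler prompt rather than my exact script:)

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gradientai/Llama-3-8B-Instruct-262k"  # assumed checkpoint; swap in the one you're testing
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="cuda",
    attn_implementation="flash_attention_2",  # helps a lot with memory at long contexts
)

long_prompt = "some filler document text " * 25_000  # pad out to ~100k+ tokens
inputs = tok(long_prompt, return_tensors="pt").to("cuda")

torch.cuda.reset_peak_memory_stats()
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=32)
print(f"Peak GPU memory: {torch.cuda.max_memory_allocated() / 2**30:.1f} GiB")
```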
