Expected minimum hardware requirements for inference?

#6
by zeroing - opened

Title is self-explanatory. 😁

Replit org

It's not easy to give you a reliable answer, given that the "minimum" requirement could also be CPU inference... if you are willing to wait minutes for a few tokens 😄
That said, we are hosting our demo on an NVIDIA A10G, and it looks pretty fast!
Further quantization with LLM.int8() would help you load the model even on smaller GPUs.
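
For reference, here is a minimal sketch of what 8-bit loading could look like with transformers + bitsandbytes. This is not an official recipe; the checkpoint name and prompt are assumptions for illustration:

```python
# Rough sketch: load the checkpoint in 8-bit via LLM.int8().
# Requires: pip install transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "replit/replit-code-v1-3b"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # the repo ships custom modeling code
    device_map="auto",       # let accelerate place the weights
    load_in_8bit=True,       # LLM.int8() quantization via bitsandbytes
)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0]))
```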

Hello, is the demo hosted natively with transformers as described in the model card? I thought it was hosted with an optimized method such as FasterTransformer.

It's hosted natively on the GPU as described in the model card, with bfloat16 precision but without any flash attention, i.e. the attn_impl kwarg defaults to 'torch'.
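
For context, a rough sketch of that setup, following the loading pattern from the model card. The attn_config key and its 'torch' value are assumptions based on the MPT-style custom code in the repo:

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "replit/replit-code-v1-3b"

# attn_config / attn_impl are assumptions based on the MPT-style
# custom code; 'torch' is the default mentioned above.
config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)
config.attn_config["attn_impl"] = "torch"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    config=config,
    torch_dtype=torch.bfloat16,  # bfloat16 precision, as on the demo
    trust_remote_code=True,
).to("cuda")
```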

When I set attn_impl to flash, flash_attn cannot be combined with ALiBi, so the wpe layer must be newly initialized.
My questions are:

  • Will you release a flash_attn version of the model, i.e. one whose wpe layer matches the flash_attn config?
  • How much faster is flash_attn than torch? Will it also speed up inference?
  • Will you release a FasterTransformer version of the code? It would benefit many people.

ggml just merged a CPU inference implementation:
https://github.com/ggerganov/ggml/tree/master/examples/replit

It's pretty fast on my M1 16GB MacBook Air.
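
For anyone who wants to try it, the workflow is roughly the usual ggml examples one. Script and binary names below are assumptions; see the README in examples/replit for the authoritative steps:

```bash
# Rough sketch of the typical ggml example workflow; names assumed.
git clone https://github.com/ggerganov/ggml
cd ggml
mkdir build && cd build
cmake .. && make -j replit

# Convert the Hugging Face checkpoint to ggml format (script name assumed)
python3 ../examples/replit/convert-h5-to-ggml.py /path/to/replit-code-v1-3b 1

# Run CPU inference on the converted model
./bin/replit -m /path/to/ggml-model.bin -p "def fibonacci(n):"
```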
