Slow and gibberish output during inference

#20
by eastwind - opened

Here is the code:

from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "tiiuae/falcon-40b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

text = f"""
User: What is quantum tunneling?
Assistant:
"""

sequences = pipeline(
    text,
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")

When running this on 4 Tesla V100s, I get this output:

Result: 
User: What is quantum tunneling?
Assistant:
<function(4
(1"(`var

//"`""31"
1

44<1
<function<function(`<`2

"(<(function(1
<var2(
(4<(
4
1(
<
12""(3'1(3
44<"34<
1`(
各种=(2"
function>
21<function"


<p`function
(2'4'3'4
22
11
123
//(((4"`3
1'2'
function(4

"3(2(4-"(
"``
(""`("(4""'21
<(("12(2``)

Side note: it took 10 minutes to generate this, by the way.

use_cache = False

This seems to have fixed the issue for now, but I assume it also makes generation very slow, as it took 5 minutes for 100 tokens.
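For reference, here is a minimal sketch of passing that flag through the pipeline call. It assumes the pipeline and tokenizer objects from the snippet above and relies on the text-generation pipeline forwarding extra keyword arguments to model.generate().

sequences = pipeline(
    text,
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    use_cache=False,  # workaround: disables the KV cache; avoids the garbled output but slows generation
)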

The V100 does not support bfloat16, so the model is probably falling back to fp32, which reduces effective TFLOPS several-fold. Switch to an A10G or better, such as an A100 GPU.
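For illustration, switching the load dtype to fp16 (which V100s do accelerate) looks like the sketch below; it reuses the variables from the first snippet, and whether float16 is numerically safe for this checkpoint is not established in this thread.

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.float16,  # fp16 has hardware support on V100; bfloat16 does not
    trust_remote_code=True,
    device_map="auto",
)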

@zkdtckk I had it running on a machine with 8 x A100 80GB GPUs.
It took some 10 minutes to produce a response to the Giraffe prompt given in the example script.
Did you fare better? If so, how?

It also produced gibberish, as described by @eastwind.
But I am not sure I want to try the use_cache option; if it is that slow, it is far too costly to use.

@captain-fim 10 minutes on 8 A100s (p4d?) is really slow; something must be wrong. It took me a few minutes on an 8x A10 setup to run the Giraffe prompt.
Inference speed is a real problem for this model; we need to figure out a way to speed it up.

@zkdtckk

10min on 8 A100 (p4d?) is really slow, there should be something wrong.

Yes, that is what I thought. And it will be even slower with the cache turned off, as @eastwind proposes, will it not? :-(
Did you need to turn the cache off too, @zkdtckk?
I still hope something is simply wrong with our setup.

I think it's an issue with multi-GPU inference, due to the custom modeling code shipped with Falcon. Maybe there's a problem with sharding the model over multiple GPUs using the device map from Accelerate.
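One quick check is to print how Accelerate actually placed the layers. A minimal sketch, assuming the pipeline object from the first snippet (hf_device_map is populated when the model is loaded with device_map="auto"):

# Shows which device each module was assigned to by Accelerate
print(pipeline.model.hf_device_map)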

@captain-fim It didn't take that long for me. I used 4 V100s; it took 3 minutes for 200 tokens.

https://github.com/huggingface/transformers/issues/15399

This mentions that it might be an issue with fp16 or bfloat16, as @zkdtckk suggested. I will try tomorrow without quantisation.

My working theory is that the shared KV cache doesn't work correctly across multiple GPUs.

Technology Innovation Institute org
edited Jun 9, 2023

We would recommend using Text Generation Inference for fast inference with Falcon. See this blog for more information.
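For reference, a minimal sketch of querying a running Text Generation Inference server from Python. It assumes a TGI server has already been launched for tiiuae/falcon-40b-instruct and is listening on localhost:8080; the payload follows TGI's /generate REST API.

import requests

prompt = "User: What is quantum tunneling?\nAssistant:"
response = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": prompt,
        "parameters": {"max_new_tokens": 200, "do_sample": True, "top_k": 10},
    },
)
print(response.json()["generated_text"])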

@FalconLLM Has the response time of Falcon-40B been benchmarked when deploying an endpoint on SageMaker?
