Dtype error in text generation

#45 · opened by cotran2

The error I receive:

File ~/.cache/huggingface/modules/transformers_modules/mosaicml/mpt-7b-instruct/85c1f1c201273bbfee661d4a2f8307c95f8956c9/attention.py:58, in scaled_multihead_dot_product_attention(query, key, value, n_heads, past_key_value, softmax_scale, attn_bias, key_padding_mask, is_causal, dropout_p, training, needs_weights, multiquery)
     56 if dropout_p:
     57     attn_weight = torch.nn.functional.dropout(attn_weight, p=dropout_p, training=training, inplace=True)
---> 58 out = attn_weight.matmul(v)
     59 out = rearrange(out, 'b h s d -> b s (h d)')
     60 if needs_weights:

RuntimeError: expected scalar type BFloat16 but found Float

Code for initialization:

import torch
import transformers
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b')

name = 'mosaicml/mpt-7b-instruct'

config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'torch'
config.init_device = 'cuda:0' # For fast initialization directly on GPU!



model = transformers.AutoModelForCausalLM.from_pretrained(
  name,
  config=config,
  #load_in_8bit=True,
  torch_dtype=torch.bfloat16, # Load model weights in bfloat16
  trust_remote_code=True,
  device_map="auto"
)

# fmt_ex is the formatted prompt string (defined elsewhere in my script)
input_ids = tokenizer(fmt_ex, return_tensors="pt").input_ids
input_ids = input_ids.to(model.device)

generate_params = {
    "max_new_tokens": 1024, 
    "temperature": 0.1, 
    "top_p": 1.0, 
    "top_k": 0, 
    "use_cache": True, 
    "do_sample": True, 
    "eos_token_id": 0, 
    "pad_token_id": 0
}
generated_ids = model.generate(input_ids, **generate_params)
output = tokenizer.decode(generated_ids.cpu().tolist()[0], skip_special_tokens=True)

for line in output.split('\n'):
    print(line)
Reply from Mosaic ML, Inc. org:

You're running the model in lower precision (fp16 or bf16), but the alibi bias needs to stay in fp32 or else model performance degrades. To get the two to work together correctly, you should wrap the forward/generate calls in autocast. Here is an example of how we had to update our tests to get this right: https://github.com/mosaicml/llm-foundry/pull/329/files#diff-3b8a58a4d021803b3171b886bb9162fd659e671131f3f61036f9210cb5d0bc7cR809
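
Applied to the snippet in the question, a minimal sketch of that fix might look like the following. This assumes the same model, input_ids, and generate_params defined above, with the model loaded in bfloat16; torch.autocast here is the standard PyTorch autocast context manager, not an llm-foundry API:

import torch

# Run generation under autocast so matmuls execute in bfloat16 while
# fp32 tensors such as the alibi bias are cast consistently, avoiding
# the "expected scalar type BFloat16 but found Float" mismatch.
with torch.autocast(device_type='cuda', dtype=torch.bfloat16):
    generated_ids = model.generate(input_ids, **generate_params)

output = tokenizer.decode(generated_ids[0].cpu().tolist(), skip_special_tokens=True)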

sam-mosaic changed discussion status to closed
