example code returns strange result

#2
by Ivan-oO - opened

Hi! I'm trying to use your model, and it doesn't work :/

My library versions:

transformers==4.41.2
torch==2.1.0a0+41361538.nv23.6

Here is my code:

from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

path_to_model = "ai-forever/RuM2M100-418M"
model = M2M100ForConditionalGeneration.from_pretrained(path_to_model)
tokenizer = M2M100Tokenizer.from_pretrained(path_to_model, src_lang="ru", tgt_lang="ru")

sentence = "прийдя в МГТУ я был удивлен никого необноружив там…"
encodings = tokenizer(sentence, return_tensors="pt")
generated_tokens = model.generate(**encodings, forced_bos_token_id=tokenizer.get_lang_id("ru"))
answer = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
print(answer)

and the result is

['a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a.']

Can you suggest any hints for debugging this situation? :)

It's definitely not a model bug: I get the same output for my test case with the "facebook/m2m100_418M" model. I guess it's a problem with my transformers setup.

Solved the problem, here is my working example =) It runs on a Jetson Orin Nano.

from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

path_to_model = "ai-forever/RuM2M100-418M"
model = M2M100ForConditionalGeneration.from_pretrained(path_to_model, device_map='cuda', low_cpu_mem_usage=True)

tokenizer = M2M100Tokenizer.from_pretrained(path_to_model, src_lang="ru", tgt_lang="ru")
sentence = "прийдя в МГТУ я был удивлен никого необноружив там…"
encodings = tokenizer(sentence, return_tensors="pt").to('cuda')

generated_tokens = model.generate(**encodings, forced_bos_token_id=tokenizer.get_lang_id("ru"))
answer = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)

print(answer)
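In case CUDA isn't always available (or you want the same script to run on CPU), a more portable sketch of the same fix is to read the device off the loaded model and move the inputs to it, instead of hard-coding 'cuda'. The `move_to_model_device` helper below is my own, not part of the transformers API:

```python
import torch
from torch import nn

def move_to_model_device(model: nn.Module, encodings: dict) -> dict:
    """Place every input tensor on the same device as the model's weights.

    Mismatched devices between model and inputs either raise an error or,
    with some builds, silently produce degenerate generations like the
    'a a a ...' output above.
    """
    device = next(model.parameters()).device
    return {k: v.to(device) for k, v in encodings.items()}
```

Then `model.generate(**move_to_model_device(model, encodings), ...)` works whether the model was loaded with `device_map='cuda'` or left on CPU.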
